Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding

Peng Jin, Ryuichi Takanobu, Wancai Zhang, Xiaochun Cao, Li Yuan

Published 2023-11-14 · CVPR 2024

Tasks: Zero-Shot Video Question Answer, VCGBench-Diverse, Video-based Generative Performance Benchmarking (Contextual Understanding, Correctness of Information, Consistency, Temporal Understanding, Detail Orientation), Video Question Answering, Science Question Answering, Video Understanding, Language Modelling

Links: Paper · PDF · Code (official)

Abstract

Large language models have demonstrated impressive universal capabilities across a wide range of open-ended tasks and have extended their utility to encompass multimodal conversations. However, existing methods encounter challenges in effectively handling both image and video understanding, particularly with limited visual tokens. In this work, we introduce Chat-UniVi, a Unified Vision-language model capable of comprehending and engaging in conversations involving images and videos through a unified visual representation. Specifically, we employ a set of dynamic visual tokens to uniformly represent images and videos. This representation framework empowers the model to efficiently utilize a limited number of visual tokens to simultaneously capture the spatial details necessary for images and the comprehensive temporal relationship required for videos. Moreover, we leverage a multi-scale representation, enabling the model to perceive both high-level semantic concepts and low-level visual details. Notably, Chat-UniVi is trained on a mixed dataset containing both images and videos, allowing direct application to tasks involving both mediums without requiring any modifications. Extensive experimental results demonstrate that Chat-UniVi consistently outperforms even existing methods exclusively designed for either images or videos. Code is available at https://github.com/PKU-YuanGroup/Chat-UniVi.
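The abstract's central mechanism is compressing many patch tokens into a small set of "dynamic visual tokens" by merging similar patches. As a rough illustration only: the sketch below uses a plain k-means-style clustering as a stand-in for the paper's actual merging procedure, and the function name `merge_visual_tokens` is a hypothetical label, not part of the Chat-UniVi codebase.

```python
import numpy as np

def merge_visual_tokens(tokens, k, iters=10, seed=0):
    """Merge N patch tokens of shape (N, D) into k tokens by clustering
    similar patches and averaging each cluster into one representative."""
    rng = np.random.default_rng(seed)
    n, _ = tokens.shape
    # initialize cluster centers from randomly chosen patch tokens
    centroids = tokens[rng.choice(n, size=k, replace=False)]
    for _ in range(iters):
        # assign each patch token to its nearest centroid (squared L2)
        dists = ((tokens[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # recompute each centroid as the mean of its assigned tokens
        for j in range(k):
            members = tokens[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids  # (k, D) merged "dynamic" tokens

# e.g. a 24x24 grid of 576 patch embeddings compressed to 64 dynamic tokens
patches = np.random.default_rng(1).normal(size=(576, 256)).astype(np.float32)
dynamic = merge_visual_tokens(patches, k=64)
print(dynamic.shape)  # (64, 256)
```

The same merging can be applied per frame and across frames, which is how a fixed, small token budget can cover both spatial detail for images and temporal structure for videos.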

Results

Task | Dataset | Metric | Value | Model
Question Answering | MSVD-QA | Accuracy | 69.3 | Chat-UniVi-7B
Question Answering | MSVD-QA | Confidence Score | 3.7 | Chat-UniVi-7B
Question Answering | TGIF-QA | Accuracy | 69 | Chat-UniVi-7B
Question Answering | TGIF-QA | Confidence Score | 3.8 | Chat-UniVi-7B
Question Answering | MSRVTT-QA | Accuracy | 55 | Chat-UniVi-7B
Question Answering | MSRVTT-QA | Confidence Score | 3.1 | Chat-UniVi-7B
Question Answering | ActivityNet-QA | Accuracy | 46.4 | Chat-UniVi-13B
Question Answering | ActivityNet-QA | Confidence Score | 3.6 | Chat-UniVi-13B
Question Answering | ActivityNet-QA | Accuracy | 46.1 | Chat-UniVi
Question Answering | ActivityNet-QA | Confidence Score | 3.3 | Chat-UniVi
Question Answering | ScienceQA | Avg. Accuracy | 90.99 | Chat-UniVi-13B
Question Answering | ScienceQA | Grades 1-6 | 91.19 | Chat-UniVi-13B
Question Answering | ScienceQA | Grades 7-12 | 90.64 | Chat-UniVi-13B
Question Answering | ScienceQA | Image Context | 88.05 | Chat-UniVi-13B
Question Answering | ScienceQA | Language Science | 88.91 | Chat-UniVi-13B
Question Answering | ScienceQA | Natural Science | 90.41 | Chat-UniVi-13B
Question Answering | ScienceQA | No Context | 90.94 | Chat-UniVi-13B
Question Answering | ScienceQA | Social Science | 95.05 | Chat-UniVi-13B
Question Answering | ScienceQA | Text Context | 89.64 | Chat-UniVi-13B
Visual Question Answering (VQA) | VideoInstruct | Consistency | 2.81 | Chat-UniVi
Visual Question Answering (VQA) | VideoInstruct | Contextual Understanding | 3.46 | Chat-UniVi
Visual Question Answering (VQA) | VideoInstruct | Correctness of Information | 2.89 | Chat-UniVi
Visual Question Answering (VQA) | VideoInstruct | Detail Orientation | 2.91 | Chat-UniVi
Visual Question Answering (VQA) | VideoInstruct | Temporal Understanding | 2.39 | Chat-UniVi
Visual Question Answering (VQA) | VideoInstruct | Mean | 2.99 | Chat-UniVi
Video Question Answering | ActivityNet-QA | Accuracy | 46.4 | Chat-UniVi-13B
Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | Chat-UniVi-13B
Video Question Answering | MSVD-QA | Accuracy | 69.3 | Chat-UniVi-7B
Video Question Answering | MSVD-QA | Confidence Score | 3.7 | Chat-UniVi-7B
Video Question Answering | TGIF-QA | Accuracy | 69 | Chat-UniVi-7B
Video Question Answering | TGIF-QA | Confidence Score | 3.8 | Chat-UniVi-7B
Video Question Answering | MSRVTT-QA | Accuracy | 55 | Chat-UniVi-7B
Video Question Answering | MSRVTT-QA | Confidence Score | 3.1 | Chat-UniVi-7B
Video Question Answering | ActivityNet-QA | Accuracy | 46.4 | Chat-UniVi-13B
Video Question Answering | ActivityNet-QA | Confidence Score | 3.6 | Chat-UniVi-13B
Video Question Answering | ActivityNet-QA | Accuracy | 46.1 | Chat-UniVi
Video Question Answering | ActivityNet-QA | Confidence Score | 3.3 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Consistency | 2.81 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Contextual Understanding | 3.46 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Correctness of Information | 2.89 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Detail Orientation | 2.91 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Temporal Understanding | 2.39 | Chat-UniVi
Generative Visual Question Answering | VideoInstruct | Mean | 2.99 | Chat-UniVi
Video-based Generative Performance Benchmarking (Correctness of Information) | VideoInstruct | gpt-score | 2.89 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Consistency | 2.81 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Contextual Understanding | 3.46 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Correctness of Information | 2.89 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Detail Orientation | 2.91 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Temporal Understanding | 2.39 | Chat-UniVi
Video-based Generative Performance Benchmarking | VideoInstruct | Mean | 2.99 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Consistency | 2.36 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Contextual Understanding | 2.66 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Correctness of Information | 2.29 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Dense Captioning | 1.33 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Detail Orientation | 2.56 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Reasoning | 3.59 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Spatial Understanding | 2.36 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Temporal Understanding | 1.56 | Chat-UniVi
VCGBench-Diverse | VideoInstruct | Mean | 2.29 | Chat-UniVi

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)