Papers With Code 2

Data sourced from the PWC Archive (CC-BY-SA 4.0).

Clover: Towards A Unified Video-Language Alignment and Fusion Model

Jingjia Huang, Yinan Li, Jiashi Feng, Xinglong Wu, Xiaoshuai Sun, Rongrong Ji

2022-07-16 · CVPR 2023

Tasks: Question Answering, Video Retrieval, Zero-Shot Video Retrieval, Text to Video Retrieval, Video Question Answering, TGIF-Transition, Video Understanding, Retrieval, Visual Question Answering (VQA), TGIF-Action, TGIF-Frame, Language Modelling

Paper · PDF · Code (official)

Abstract

Building a universal Video-Language model for solving various video understanding tasks (e.g., text-video retrieval, video question answering) is an open challenge in the machine learning field. Towards this goal, most recent works build the model by stacking uni-modal and cross-modal feature encoders and train it with pair-wise contrastive pre-text tasks. Though offering attractive generality, the resulting models have to compromise between efficiency and performance, and they mostly adopt different architectures to deal with different downstream tasks. We find this is because the pair-wise training cannot adequately align and fuse features from different modalities. We then introduce Clover, a Correlated Video-Language pre-training method, towards a universal Video-Language model for solving multiple video understanding tasks without compromising either performance or efficiency. It improves cross-modal feature alignment and fusion via a novel tri-modal alignment pre-training task. Additionally, we propose to enhance the tri-modal alignment by incorporating learning from semantically masked samples and a new pair-wise ranking loss. Clover establishes new state-of-the-art results on multiple downstream tasks, including three retrieval tasks in both zero-shot and fine-tuning settings, and eight video question answering tasks. Code and pre-trained models will be released at https://github.com/LeeYN-43/Clover.
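The abstract contrasts Clover's tri-modal alignment with the pair-wise contrastive pre-training used by prior work. As background, that pair-wise (two-modality) objective is typically a symmetric InfoNCE loss over matched video/text embedding pairs; a minimal NumPy sketch follows. This is the baseline formulation the paper argues against, not Clover's tri-modal loss, and all names and values here are illustrative:

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric pair-wise InfoNCE over a batch of matched video/text pairs.

    video_emb, text_emb: (batch, dim) arrays where row i of each is a pair.
    Returns the mean of the video-to-text and text-to-video losses.
    """
    # L2-normalise so dot products are cosine similarities
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature           # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # matched pair sits on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # video->text uses rows of logits; text->video uses the transpose
    return 0.5 * (xent(logits) + xent(logits.T))
```

Because the loss only pulls each video-text pair together against in-batch negatives, it does not by itself enforce the finer alignment and fusion across modalities that the paper's tri-modal task targets.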

Results

Task | Dataset | Metric | Value | Model
Video Retrieval | MSR-VTT-1kA | text-to-video R@1 | 40.5 | Clover
Video Retrieval | MSR-VTT-1kA | text-to-video R@5 | 69.8 | Clover
Video Retrieval | MSR-VTT-1kA | text-to-video R@10 | 79.4 | Clover
Video Retrieval | MSR-VTT-1kA | text-to-video Median Rank | 2 | Clover
Video Retrieval | DiDeMo | text-to-video R@1 | 50.1 | Clover
Video Retrieval | DiDeMo | text-to-video R@5 | 76.7 | Clover
Video Retrieval | DiDeMo | text-to-video R@10 | 85.6 | Clover
Video Retrieval | DiDeMo | text-to-video Median Rank | 1 | Clover
Video Retrieval | LSMDC | text-to-video R@1 | 24.8 | Clover
Video Retrieval | LSMDC | text-to-video R@5 | 44.0 | Clover
Video Retrieval | LSMDC | text-to-video R@10 | 54.5 | Clover
Video Retrieval | LSMDC | text-to-video Median Rank | 8 | Clover
Visual Question Answering (VQA) | MSRVTT-QA | Accuracy | 0.441 | Clover
Visual Question Answering (VQA) | MSVD-QA | Accuracy | 0.524 | Clover
Video Question Answering | LSMDC-FiB | Accuracy | 54.1 | Clover
Video Question Answering | LSMDC-MC | Accuracy | 83.7 | Clover
Video Question Answering | MSRVTT-MC | Accuracy | 95.2 | Clover
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@1 | 26.4 | Clover
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@5 | 49.5 | Clover
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@10 | 60.0 | Clover
Zero-Shot Video Retrieval | MSR-VTT | text-to-video Median Rank | 6 | Clover
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@1 | 29.5 | Clover
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@5 | 55.2 | Clover
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@10 | 66.3 | Clover
Zero-Shot Video Retrieval | DiDeMo | text-to-video Median Rank | 4 | Clover
Zero-Shot Video Retrieval | LSMDC | text-to-video R@1 | 14.7 | Clover
Zero-Shot Video Retrieval | LSMDC | text-to-video R@5 | 29.2 | Clover
Zero-Shot Video Retrieval | LSMDC | text-to-video R@10 | 38.2 | Clover
Zero-Shot Video Retrieval | LSMDC | text-to-video Median Rank | 24 | Clover
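The retrieval rows use standard ranking metrics: R@K is the percentage of text queries whose correct video appears in the top K results, and Median Rank is the median position of the correct video (lower is better). A minimal sketch of how these are computed from per-query ranks (the function name and toy ranks are illustrative, not from the paper):

```python
import statistics

def retrieval_metrics(ranks):
    """Compute text-to-video retrieval metrics from 1-based ranks.

    ranks[i] is the position of the correct video in the ranked list
    returned for text query i (rank 1 = retrieved first).
    """
    n = len(ranks)
    return {
        "R@1":  100.0 * sum(r <= 1 for r in ranks) / n,   # % solved in top-1
        "R@5":  100.0 * sum(r <= 5 for r in ranks) / n,
        "R@10": 100.0 * sum(r <= 10 for r in ranks) / n,
        "MedR": statistics.median(ranks),                 # lower is better
    }
```

For example, ranks of [1, 2, 6, 11] across four queries yield R@1 = 25.0, R@5 = 50.0, R@10 = 75.0, and a Median Rank of 4.0.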

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)