Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


VindLU: A Recipe for Effective Video-and-Language Pretraining

Feng Cheng, Xizi Wang, Jie Lei, David Crandall, Mohit Bansal, Gedas Bertasius

Published: 2022-12-09 · CVPR 2023
Tasks: Question Answering · Video Retrieval · Text-to-Video Retrieval · Video Question Answering · Retrieval
Paper · PDF · Code (official)

Abstract

The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making the reproducibility, analysis, and comparison of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in VidL model design. Among the factors that we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors include: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using our recipe achieves results comparable to or better than the state of the art on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo and 55.0% on ActivityNet, outperforming the current SOTA by 7.8% and 6.1%, respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
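The abstract's retrieval gains imply the prior-best R@1 scores by simple subtraction. The values below are derived from the abstract's numbers under the assumption that the stated margins are absolute percentage points; they are not quoted from the paper itself:

```python
# Implied prior state-of-the-art R@1, derived from the abstract's
# reported scores minus the stated improvement margins (assumed to
# be absolute percentage points).
didemo_prior = round(61.2 - 7.8, 1)       # implied prior best on DiDeMo
activitynet_prior = round(55.0 - 6.1, 1)  # implied prior best on ActivityNet
print(didemo_prior, activitynet_prior)    # 53.4 48.9
```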

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Video Retrieval | Condensed Movies | text-to-video R@1 | 18.4 | VindLU
Video Retrieval | Condensed Movies | text-to-video R@5 | 36.4 | VindLU
Video Retrieval | Condensed Movies | text-to-video R@10 | 44.3 | VindLU
Video Retrieval | MSR-VTT-1kA | text-to-video R@1 | 46.5 | VindLU
Video Retrieval | MSR-VTT-1kA | text-to-video R@5 | 71.5 | VindLU
Video Retrieval | MSR-VTT-1kA | text-to-video R@10 | 80.4 | VindLU
Video Retrieval | SSv2-template retrieval | text-to-video R@1 | 83.3 | VindLU
Video Retrieval | SSv2-template retrieval | text-to-video R@5 | 100 | VindLU
Video Retrieval | SSv2-template retrieval | text-to-video R@10 | 100 | VindLU
Video Retrieval | SSv2-label retrieval | text-to-video R@1 | 53.1 | VindLU
Video Retrieval | SSv2-label retrieval | text-to-video R@5 | 81.8 | VindLU
Video Retrieval | ActivityNet | text-to-video R@1 | 55.0 | VindLU
Video Retrieval | ActivityNet | text-to-video R@5 | 81.4 | VindLU
Video Retrieval | ActivityNet | text-to-video R@10 | 89.7 | VindLU
Video Retrieval | DiDeMo | text-to-video R@1 | 61.2 | VindLU
Video Retrieval | DiDeMo | text-to-video R@5 | 85.8 | VindLU
Video Retrieval | DiDeMo | text-to-video R@10 | 91.0 | VindLU
Video Retrieval | QuerYD | text-to-video R@1 | 67.8 | VindLU
Video Retrieval | QuerYD | text-to-video R@5 | 86.3 | VindLU
Video Retrieval | QuerYD | text-to-video R@10 | 81.8 | VindLU
Video Question Answering | TVQA | Accuracy | 79.0 | VindLU
Video Question Answering | ActivityNet-QA | Accuracy | 44.7 | VindLU
Video Question Answering | MSRVTT-QA | Accuracy | 44.6 | VindLU
Video Question Answering | MSRVTT-MC | Accuracy | 95.5 | VindLU
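The retrieval metric used throughout the table, text-to-video R@k, is the percentage of text queries whose ground-truth video appears among the top-k ranked videos. A minimal sketch of the standard computation, assuming a square similarity matrix whose diagonal holds the ground-truth pairs (the toy numbers are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(sim, k):
    """Percentage of text queries whose matching video ranks in the top-k.

    sim: (num_texts, num_videos) similarity matrix; the ground-truth
    pairing is assumed to be the diagonal (text i matches video i).
    """
    # Rank videos for each text query by descending similarity.
    ranks = np.argsort(-sim, axis=1)
    # Position of the ground-truth video within each query's ranking.
    gt_positions = np.argmax(ranks == np.arange(len(sim))[:, None], axis=1)
    return float(np.mean(gt_positions < k)) * 100.0

# Toy example: 3 text queries vs 3 videos.
sim = np.array([
    [0.9, 0.10, 0.2],   # text 0: correct video ranked 1st
    [0.3, 0.20, 0.8],   # text 1: correct video ranked 3rd
    [0.1, 0.95, 0.9],   # text 2: correct video ranked 2nd
])
print(recall_at_k(sim, 1))  # → 33.33… (1 of 3 queries correct at rank 1)
print(recall_at_k(sim, 5))  # → 100.0 (k covers every video)
```

R@k is monotonically non-decreasing in k for a single evaluation run, so rows where R@5 exceeds R@10 (e.g. QuerYD above) reflect values drawn from different leaderboard submissions.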
