EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone

Shraman Pramanick, Yale Song, Sayan Nag, Kevin Qinghong Lin, Hardik Shah, Mike Zheng Shou, Rama Chellappa, Pengchuan Zhang

Published 2023-07-11 · ICCV 2023
Tasks: Question Answering, Moment Queries, Multi-Instance Retrieval, Video Summarization, Action Recognition, Natural Language Queries
Links: Paper · PDF · Code (official)

Abstract

Video-language pre-training (VLP) has become increasingly important due to its ability to generalize to various vision and language tasks. However, existing egocentric VLP frameworks utilize separate video and language encoders and learn task-specific cross-modal information only during fine-tuning, limiting the development of a unified system. In this work, we introduce the second generation of egocentric video-language pre-training (EgoVLPv2), a significant improvement over the previous generation, by incorporating cross-modal fusion directly into the video and language backbones. EgoVLPv2 learns strong video-text representations during pre-training and reuses the cross-modal attention modules to support different downstream tasks in a flexible and efficient manner, reducing fine-tuning costs. Moreover, our proposed fusion-in-the-backbone strategy is more lightweight and compute-efficient than stacking additional fusion-specific layers. Extensive experiments on a wide range of VL tasks demonstrate the effectiveness of EgoVLPv2, achieving consistent state-of-the-art performance over strong baselines across all downstream tasks. Our project page can be found at https://shramanpramanick.github.io/EgoVLPv2/.
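To make the "fusion in the backbone" idea concrete, below is a minimal PyTorch-style sketch of one backbone layer with a switchable, gated cross-attention sub-layer. This is not the authors' implementation; the class name, argument names, and dimensions are all illustrative assumptions. The point it shows is the one the abstract makes: the same layer can run as a uni-modal encoder block (fusion off, dual-encoder behavior) or additionally attend to the other modality's tokens (fusion on), without stacking separate fusion-specific layers.

```python
import torch
import torch.nn as nn

class FusionInBackboneLayer(nn.Module):
    """Illustrative transformer layer with switchable cross-modal fusion.

    A minimal sketch of the fusion-in-the-backbone idea, NOT EgoVLPv2's
    actual code: a standard self-attention block plus a gated
    cross-attention sub-layer that can be toggled at runtime.
    """

    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Learnable gate initialized to zero, so fusion starts as an
        # identity and does not disturb the pre-trained uni-modal path.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x, other=None, fuse: bool = False):
        # Standard self-attention over this modality's own tokens.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Optional gated cross-attention into the other modality's tokens.
        if fuse and other is not None:
            h = self.norm2(x)
            x = x + torch.tanh(self.gate) * self.cross_attn(
                h, other, other, need_weights=False
            )[0]
        return x + self.mlp(self.norm3(x))


# Usage: the same layer serves both regimes, so a downstream task can
# pick fast dual-encoder retrieval (fuse=False) or joint cross-modal
# reasoning (fuse=True) while reusing the same pre-trained weights.
video_tokens = torch.randn(2, 196, 768)  # (batch, video tokens, dim)
text_tokens = torch.randn(2, 32, 768)    # (batch, text tokens, dim)
layer = FusionInBackboneLayer()
uni = layer(video_tokens, fuse=False)
fused = layer(video_tokens, other=text_tokens, fuse=True)
```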

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Video Summarization | Query-Focused Video Summarization Dataset | F1 (avg) | 52.08 | EgoVLPv2
Question Answering | EgoTaskQA | Direct | 46.26 | EgoVLPv2
Activity Recognition | Charades-Ego | mAP | 34.1 | EgoVLPv2
Action Recognition | Charades-Ego | mAP | 34.1 | EgoVLPv2
Natural Language Queries | Ego4D | R@1, IoU=0.3 | 12.95 | EgoVLPv2
Natural Language Queries | Ego4D | R@1, IoU=0.5 | 7.91 | EgoVLPv2
Natural Language Queries | Ego4D | R@5, IoU=0.3 | 23.8 | EgoVLPv2
Natural Language Queries | Ego4D | R@5, IoU=0.5 | 16.11 | EgoVLPv2
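For readers unfamiliar with the Natural Language Queries metrics above: R@k at IoU=θ is the percentage of queries for which at least one of the top-k predicted temporal windows overlaps the ground-truth window with temporal IoU ≥ θ. Below is a minimal sketch of that computation under common conventions; the function names and the assumption of a single ground-truth window per query are ours, not taken from the EgoVLPv2 evaluation code.

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) windows, e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(predictions, ground_truths, k=1, iou_thresh=0.3):
    """Percentage of queries where any of the top-k ranked windows
    reaches IoU >= iou_thresh against the ground-truth window.

    predictions: list (one entry per query) of ranked (start, end) windows.
    ground_truths: matching list of single (start, end) answer windows.
    """
    hits = 0
    for preds, gt in zip(predictions, ground_truths):
        if any(temporal_iou(p, gt) >= iou_thresh for p in preds[:k]):
            hits += 1
    return 100.0 * hits / len(predictions)
```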

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Warehouse Spatial Question Answering with LLM Agent (2025-07-14)