
Egocentric Video-Language Pretraining

Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Zhongcong Xu, Difei Gao, RongCheng Tu, Wenzhe Zhao, Weijie Kong, Chengfei Cai, Hongfa Wang, Dima Damen, Bernard Ghanem, Wei Liu, Mike Zheng Shou

2022-06-03

Tasks: Question Answering, Video-Text Retrieval, Text Retrieval, Moment Queries, Multi-Instance Retrieval, Video Summarization, Contrastive Learning, Temporal Localization, Object State Change Classification, Action Recognition, Retrieval, Natural Language Queries

Abstract

Video-Language Pretraining (VLP), which aims to learn transferable representation to advance a wide range of video-text downstream tasks, has recently received increasing attention. Best performing works rely on large-scale, 3rd-person video-text datasets, such as HowTo100M. In this work, we exploit the recently released Ego4D dataset to pioneer Egocentric VLP along three directions. (i) We create EgoClip, a 1st-person video-text pretraining dataset comprising 3.8M clip-text pairs well-chosen from Ego4D, covering a large variety of human daily activities. (ii) We propose a novel pretraining objective, dubbed EgoNCE, which adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. (iii) We introduce EgoMCQ, a development benchmark that is close to EgoClip and hence can support effective validation and fast exploration of our design decisions in EgoClip and EgoNCE. Furthermore, we demonstrate strong performance on five egocentric downstream tasks across three datasets: video-text retrieval on EPIC-KITCHENS-100; action recognition on Charades-Ego; natural language query, moment query, and object state change classification on Ego4D challenge benchmarks. The dataset and code are available at https://github.com/showlab/EgoVLP.
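
EgoNCE, as described in the abstract, adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. The Python sketch below shows only the standard symmetric InfoNCE video-text objective that such pretraining builds on, with comments marking where the egocentric mining would change the targets; the function name, temperature value, and tensor shapes are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def video_text_infonce(video_emb, text_emb, temperature=0.05):
    # video_emb, text_emb: (B, D) clip and narration embeddings from the two
    # encoders. The i-th video and i-th text form the positive pair; all other
    # in-batch pairs act as negatives. EgoNCE additionally mines extra positives
    # (clips with similar narrations) and hard negatives (other clips from the
    # same video) -- an assumption about where the mining plugs in, based on the
    # abstract's description; it is not reproduced here.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)    # video -> text direction
    loss_t2v = F.cross_entropy(logits.T, targets)  # text -> video direction
    return 0.5 * (loss_v2t + loss_t2v)

# Example: a batch of 8 clip/narration embedding pairs of dimension 256.
loss = video_text_infonce(torch.randn(8, 256), torch.randn(8, 256))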

Results

Task                     | Dataset                                   | Metric               | Value | Model
-------------------------|-------------------------------------------|----------------------|-------|-------
Video Summarization      | Query-Focused Video Summarization Dataset | F1 (avg)             | 49.72 | EgoVLP
Question Answering       | EgoTaskQA                                 | Direct               | 42.51 | EgoVLP
Activity Recognition     | Charades-Ego                              | mAP                  | 32.1  | EgoVLP
Action Recognition       | Charades-Ego                              | mAP                  | 32.1  | EgoVLP
Natural Language Queries | Ego4D                                     | R@1, IoU=0.3         | 10.46 | EgoVLP
Natural Language Queries | Ego4D                                     | R@1, IoU=0.5         | 6.24  | EgoVLP
Natural Language Queries | Ego4D                                     | R@1, mean (0.3, 0.5) | 8.35  | EgoVLP
Natural Language Queries | Ego4D                                     | R@5, IoU=0.3         | 16.76 | EgoVLP
Natural Language Queries | Ego4D                                     | R@5, IoU=0.5         | 11.29 | EgoVLP
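
In the Ego4D Natural Language Queries rows above, "R@K, IoU=θ" denotes the percentage of queries for which at least one of the top-K predicted temporal windows overlaps the ground-truth window with temporal IoU of at least θ. The snippet below is a minimal, illustrative computation of that metric under this standard reading; it is not the official Ego4D evaluation code, and all names are placeholders.

def temporal_iou(pred, gt):
    # IoU between two (start, end) intervals, e.g. in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(predictions, ground_truths, k=1, iou_threshold=0.3):
    # predictions: one ranked list of (start, end) moments per query.
    # ground_truths: one (start, end) target moment per query.
    hits = 0
    for preds, gt in zip(predictions, ground_truths):
        if any(temporal_iou(p, gt) >= iou_threshold for p in preds[:k]):
            hits += 1
    return 100.0 * hits / len(ground_truths)

# Example: the first query's top-1 moment overlaps its target (IoU = 0.6),
# the second does not, so R@1 at IoU=0.3 is 50.0.
preds = [[(2.0, 6.0)], [(20.0, 25.0)]]
gts = [(3.0, 7.0), (40.0, 45.0)]
print(recall_at_k(preds, gts, k=1, iou_threshold=0.3))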

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)