
EgoVideo: Exploring Egocentric Foundation Model and Downstream Adaptation

Baoqi Pei, Guo Chen, Jilan Xu, Yuping He, Yicheng Liu, Kanghua Pan, Yifei Huang, Yali Wang, Tong Lu, Limin Wang, Yu Qiao

2024-06-26

Tasks: Long-Term Action Anticipation, Short-term Object Interaction Anticipation, Moment Queries, Action Anticipation, Multi-Instance Retrieval, Action Recognition, Natural Language Queries, Domain Adaptation

Abstract

In this report, we present our solutions to the EgoVis Challenges at CVPR 2024, including five tracks in the Ego4D challenge and three tracks in the EPIC-Kitchens challenge. Building upon the video-language two-tower model and leveraging our meticulously organized egocentric video data, we introduce a novel foundation model called EgoVideo. This model is specifically designed to cater to the unique characteristics of egocentric videos and provides strong support for our competition submissions. In the Ego4D challenges, we tackle various tasks including Natural Language Queries, Step Grounding, Moment Queries, Short-term Object Interaction Anticipation, and Long-term Action Anticipation. In addition, we participate in the EPIC-Kitchens challenge, where we engage in the Action Recognition, Multi-Instance Retrieval, and Domain Adaptation for Action Recognition tracks. By adapting EgoVideo to these diverse tasks, we showcase its versatility and effectiveness in different egocentric video analysis scenarios, demonstrating the powerful representation ability of EgoVideo as an egocentric foundation model. Our codebase and pretrained models are publicly available at https://github.com/OpenGVLab/EgoVideo.
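The abstract names a video-language two-tower design but does not spell out the architecture; the sketch below is a minimal, hypothetical illustration of such a dual-encoder with CLIP-style contrastive training, not EgoVideo's actual implementation. The module names, dimensions, and linear-projection "towers" are placeholders standing in for the real transformer backbones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerVideoLanguageModel(nn.Module):
    """Generic two-tower (dual-encoder) video-language model.

    Hypothetical sketch: each tower is a stand-in projection that
    maps precomputed video/text features into a shared embedding
    space, as in CLIP-style contrastive pretraining.
    """

    def __init__(self, video_dim=768, text_dim=512, embed_dim=256):
        super().__init__()
        # Stand-ins for the video and text encoder backbones.
        self.video_proj = nn.Linear(video_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Learnable temperature, initialized to log(1/0.07) as in CLIP.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, video_feats, text_feats):
        # L2-normalize so the dot product is cosine similarity.
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        # Pairwise similarity logits for a batch of (video, text) pairs.
        return self.logit_scale.exp() * v @ t.T

def contrastive_loss(logits):
    # Matched pairs lie on the diagonal; symmetric InfoNCE objective.
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```

A dual encoder like this is what makes the downstream adaptations cheap: retrieval-style tracks can reuse the shared embedding space directly, while recognition and grounding tracks fine-tune one tower with a task head.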

Results

Task | Dataset | Metric | Value | Model
Short-term Object Interaction Anticipation | Ego4D | Noun (Top5 mAP) | 31.08 | EgoVideo
Short-term Object Interaction Anticipation | Ego4D | Noun+TTC (Top5 mAP) | 12.41 | EgoVideo
Short-term Object Interaction Anticipation | Ego4D | Noun+Verb (Top5 mAP) | 16.18 | EgoVideo
Short-term Object Interaction Anticipation | Ego4D | Overall (Top5 mAP) | 7.21 | EgoVideo
Natural Language Queries | Ego4D | R@1 IoU=0.3 | 28.05 | EgoVideo
Natural Language Queries | Ego4D | R@1 IoU=0.5 | 19.31 | EgoVideo
Natural Language Queries | Ego4D | R@1 Mean (0.3 and 0.5) | 23.68 | EgoVideo
Natural Language Queries | Ego4D | R@5 IoU=0.3 | 44.16 | EgoVideo
Natural Language Queries | Ego4D | R@5 IoU=0.5 | 31.37 | EgoVideo
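For reference, the R@k numbers above follow the standard temporal-grounding convention: a query counts as recalled if any of the top-k predicted segments overlaps the ground-truth segment with temporal IoU at or above the threshold. Below is a minimal sketch of that computation; it is illustrative, not the official Ego4D NLQ evaluation code, and the function names are our own.

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) pairs."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(predictions, ground_truths, k=1, iou_thresh=0.3):
    """R@k at a tIoU threshold, reported as a percentage.

    predictions: per-query lists of ranked (start, end) segments.
    ground_truths: one (start, end) ground-truth segment per query.
    """
    hits = 0
    for preds, gt in zip(predictions, ground_truths):
        # A query is recalled if any top-k prediction clears the threshold.
        if any(temporal_iou(p, gt) >= iou_thresh for p in preds[:k]):
            hits += 1
    return 100.0 * hits / len(ground_truths)
```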

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)
An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation (2025-07-11)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
YOLO-APD: Enhancing YOLOv8 for Robust Pedestrian Detection on Complex Road Geometries (2025-07-07)
CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion (2025-07-04)