Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World

Yifei Huang, Guo Chen, Jilan Xu, Mingfang Zhang, Lijin Yang, Baoqi Pei, Hongjie Zhang, Lu Dong, Yali Wang, Limin Wang, Yu Qiao

2024-03-24 · CVPR 2024
Tasks: Video Retrieval, Long-Term Anticipation, Action Anticipation, Action Quality Assessment
Paper · PDF · Code (official)

Abstract

Being able to map the activities of others into one's own point of view is one fundamental human skill even from a very early age. Taking a step toward understanding this human ability, we introduce EgoExoLearn, a large-scale dataset that emulates the human demonstration following process, in which individuals record egocentric videos as they execute tasks guided by demonstration videos. Focusing on the potential applications in daily assistance and professional support, EgoExoLearn contains egocentric and demonstration video data spanning 120 hours captured in daily life scenarios and specialized laboratories. Along with the videos we record high-quality gaze data and provide detailed multimodal annotations, formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints. To this end, we present benchmarks such as cross-view association, cross-view action planning, and cross-view referenced skill assessment, along with detailed analysis. We expect EgoExoLearn can serve as an important resource for bridging the actions across views, thus paving the way for creating AI agents capable of seamlessly learning by observing humans in the real world. Code and data can be found at: https://github.com/OpenGVLab/EgoExoLearn
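The cross-view association benchmark mentioned above can be framed as a retrieval problem: each egocentric clip must be matched to its exo-centric demonstration counterpart. A minimal sketch of that framing, assuming clip-level feature vectors from some video encoder (the function names and toy embeddings below are illustrative, not the paper's actual baseline):

```python
import numpy as np

def associate(ego_emb: np.ndarray, exo_emb: np.ndarray) -> np.ndarray:
    """For each ego clip, return the index of the most similar exo clip
    under cosine similarity (top-1 retrieval)."""
    ego = ego_emb / np.linalg.norm(ego_emb, axis=1, keepdims=True)
    exo = exo_emb / np.linalg.norm(exo_emb, axis=1, keepdims=True)
    sim = ego @ exo.T          # (n_ego, n_exo) cosine-similarity matrix
    return sim.argmax(axis=1)  # top-1 exo match per ego clip

def top1_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of ego clips whose top-1 match is the ground-truth pair."""
    return float((pred == gt).mean())

# Toy usage with random features; a real setup would use encoder outputs.
rng = np.random.default_rng(0)
ego = rng.normal(size=(8, 64))
exo = ego + 0.1 * rng.normal(size=(8, 64))  # paired exo views, lightly perturbed
pred = associate(ego, exo)
acc = top1_accuracy(pred, np.arange(8))
```

The benchmark tables below report exactly this kind of top-1 accuracy for the cross-view association baselines.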

Results

Task                         | Dataset     | Metric   | Value  | Model
-----------------------------|-------------|----------|--------|---------------------------------------------------
Video                        | EgoExoLearn | Accuracy | 48.35  | cross-view association baseline (gaze, val)
Video                        | EgoExoLearn | Accuracy | 44.15  | cross-view association baseline (no gaze, val)
Activity Recognition         | EgoExoLearn | Accuracy | 45.45  | Action anticipation baseline (co-training, with gaze)
Activity Recognition         | EgoExoLearn | Accuracy | 38.7   | Action anticipation baseline (co-training, no gaze)
Action Recognition           | EgoExoLearn | Accuracy | 45.45  | Action anticipation baseline (co-training, with gaze)
Action Recognition           | EgoExoLearn | Accuracy | 38.7   | Action anticipation baseline (co-training, no gaze)
Action Quality Assessment    | EgoExoLearn | Accuracy | 81.27  | RAAN+TL+Gaze
Action Quality Assessment    | EgoExoLearn | Accuracy | 79.875 | RAAN+TL
Action Anticipation          | EgoExoLearn | Accuracy | 45.45  | Action anticipation baseline (co-training, with gaze)
Action Anticipation          | EgoExoLearn | Accuracy | 38.7   | Action anticipation baseline (co-training, no gaze)
Video Retrieval              | EgoExoLearn | Accuracy | 48.35  | cross-view association baseline (gaze, val)
Video Retrieval              | EgoExoLearn | Accuracy | 44.15  | cross-view association baseline (no gaze, val)
2D Human Pose Estimation     | EgoExoLearn | Accuracy | 45.45  | Action anticipation baseline (co-training, with gaze)
2D Human Pose Estimation     | EgoExoLearn | Accuracy | 38.7   | Action anticipation baseline (co-training, no gaze)
Action Recognition In Videos | EgoExoLearn | Accuracy | 45.45  | Action anticipation baseline (co-training, with gaze)
Action Recognition In Videos | EgoExoLearn | Accuracy | 38.7   | Action anticipation baseline (co-training, no gaze)
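Every entry in the table reports a single accuracy value as a percentage. A minimal sketch of how such a figure is computed from per-clip predictions (the helper name is invented here for illustration):

```python
def accuracy_percent(preds, labels):
    """Percentage of predictions that match their ground-truth labels."""
    if len(preds) != len(labels) or not labels:
        raise ValueError("preds and labels must be non-empty and equal length")
    correct = sum(p == l for p, l in zip(preds, labels))
    return 100.0 * correct / len(labels)

# Toy usage: 3 of 4 clips classified correctly -> 75.0
score = accuracy_percent([1, 0, 1, 1], [1, 1, 1, 1])
```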

Related Papers

Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval (2025-06-11)
MAGMaR Shared Task System Description: Video Retrieval with OmniEmbed (2025-06-11)
V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning (2025-06-11)
From Play to Replay: Composed Video Retrieval for Temporally Fine-Grained Videos (2025-06-05)
Leveraging Auxiliary Information in Text-to-Video Retrieval: A Review (2025-05-29)
Learning World Models for Interactive Video Generation (2025-05-28)
PHI: Bridging Domain Shift in Long-Term Action Quality Assessment via Progressive Hierarchical Instruction (2025-05-26)
LoVR: A Benchmark for Long Video Retrieval in Multimodal Contexts (2025-05-20)