


Learning to Anticipate Egocentric Actions by Imagination

Yu Wu, Linchao Zhu, Xiaohan Wang, Yi Yang, Fei Wu

2021-01-13 | Action Anticipation | Autonomous Driving | Contrastive Learning

Paper | PDF

Abstract

Anticipating actions before they are executed is crucial for a wide range of practical applications, including autonomous driving and robotics. In this paper, we study the egocentric action anticipation task, which predicts a future action seconds before it is performed in egocentric videos. Previous approaches focus on summarizing the observed content and directly predicting the future action from past observations. We believe action anticipation would benefit from mining cues that compensate for the missing information in the unobserved frames. We therefore propose to decompose action anticipation into a series of future feature predictions: we imagine how the visual features change in the near future and then predict future action labels based on these imagined representations. Unlike prior work, our ImagineRNN is optimized with contrastive learning rather than feature regression. We train the ImagineRNN with a proxy task: selecting the correct future states from distractors. We further improve ImagineRNN with residual anticipation, i.e., changing its target to the feature difference between adjacent frames instead of the frame content itself. This encourages the network to focus on our target, the future action, since the difference between adjacent frame features is more informative for forecasting the future. Extensive experiments on two large-scale egocentric action datasets validate the effectiveness of our method, which significantly outperforms previous methods on both the seen and unseen test sets of the EPIC Kitchens Action Anticipation Challenge.
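The abstract compresses two training ideas that are easy to state in code: a contrastive proxy task (pick the true future feature out of distractors) and residual anticipation (predict the feature difference between adjacent frames rather than the frame feature itself). The sketch below is a minimal PyTorch illustration under assumed details, not the authors' released implementation: the module `ImagineRNNSketch`, the GRU backbone, the feature dimension, and the in-batch-negatives InfoNCE loss are all hypothetical stand-ins for the paper's ImagineRNN, residual target, and proxy task.

```python
# Minimal sketch (NOT the authors' code) of the two ideas in the abstract:
# (1) residual anticipation: the head predicts f_{t+1} - f_t, not f_{t+1};
# (2) contrastive proxy task: pick the true future feature out of distractors.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImagineRNNSketch(nn.Module):
    """GRU that rolls visual features forward by predicting per-step deltas."""

    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 1024):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.delta_head = nn.Linear(hidden_dim, feat_dim)  # predicts f_{t+1} - f_t

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, D) observed frame features
        hidden, _ = self.rnn(feats)
        deltas = self.delta_head(hidden)   # residual anticipation target
        return feats + deltas              # imagined features for the next steps


def contrastive_future_loss(pred: torch.Tensor, target: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style proxy task: for each predicted future feature, the
    matching ground-truth feature is the positive; the other samples in
    the batch serve as distractors (an assumed negative-sampling scheme)."""
    pred = F.normalize(pred, dim=-1)           # (N, D)
    target = F.normalize(target, dim=-1)       # (N, D)
    logits = pred @ target.t() / temperature   # similarity to all candidates
    labels = torch.arange(pred.size(0), device=pred.device)
    return F.cross_entropy(logits, labels)     # select the correct future state


if __name__ == "__main__":
    B, T, D = 4, 8, 1024
    feats = torch.randn(B, T + 1, D)           # T+1 frames of visual features
    model = ImagineRNNSketch(feat_dim=D)
    imagined = model(feats[:, :-1])            # imagine features for steps 1..T
    loss = contrastive_future_loss(imagined.reshape(-1, D),
                                   feats[:, 1:].reshape(-1, D))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

Using in-batch features as distractors is the simplest instantiation of the proxy task; the paper may draw distractors differently (e.g., from other time steps of the same video), so treat the negative-sampling choice above as an assumption rather than the method's definition.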

Results

Task | Dataset | Metric | Value | Model
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 1 Accuracy - Act. | 14.66 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 1 Accuracy - Noun | 22.79 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 1 Accuracy - Verb | 35.44 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 5 Accuracy - Act. | 34.98 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 5 Accuracy - Noun | 52.09 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Seen test set (S1)) | Top 5 Accuracy - Verb | 79.72 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 1 Accuracy - Act. | 9.25 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 1 Accuracy - Noun | 15.50 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 1 Accuracy - Verb | 29.33 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 5 Accuracy - Act. | 22.19 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 5 Accuracy - Noun | 35.78 | ImagineRNN
Action Anticipation | EPIC-KITCHENS-55 (Unseen test set (S2)) | Top 5 Accuracy - Verb | 70.67 | ImagineRNN

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)