
End-to-end Learning of Action Detection from Frame Glimpses in Videos

Serena Yeung, Olga Russakovsky, Greg Mori, Li Fei-Fei

2015-11-22 · CVPR 2016
Tasks: Action Detection, Temporal Action Localization

Abstract

In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.
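
The agent described above is concrete enough to sketch in code. Below is a minimal, hypothetical PyTorch rendering of the idea, not the authors' released implementation: a recurrent agent consumes one frame feature per glimpse, samples a continuous "where to look next" action and a binary "emit a prediction now" action, and is trained with REINFORCE since the sampling steps are non-differentiable. All names, dimensions, and the random stand-in features are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the paper's code) of a recurrent glimpse
# agent trained with REINFORCE. Frame features are random stand-ins for the
# per-frame CNN features used in the paper.
import torch
import torch.nn as nn
from torch.distributions import Bernoulli, Normal

class GlimpseAgent(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTMCell(feat_dim, hidden_dim)
        self.where = nn.Linear(hidden_dim, 1)   # mean of the next glimpse location, normalized to [0, 1]
        self.emit = nn.Linear(hidden_dim, 1)    # logit for "emit a candidate detection now"
        self.bounds = nn.Linear(hidden_dim, 3)  # (start, end, confidence) of the candidate

    def forward(self, video_feats, n_glimpses=6, loc_std=0.1):
        T = video_feats.size(0)
        h = torch.zeros(1, self.rnn.hidden_size)
        c = torch.zeros_like(h)
        loc = torch.tensor([[0.0]])             # first glimpse at the start of the video
        log_probs, candidates = [], []
        for _ in range(n_glimpses):
            frame = video_feats[int(loc.item() * (T - 1))].unsqueeze(0)
            h, c = self.rnn(frame, (h, c))
            # Sample "where to look next" from a Gaussian policy over locations.
            loc_dist = Normal(torch.sigmoid(self.where(h)), loc_std)
            loc = loc_dist.sample().clamp(0.0, 1.0)
            # Sample the binary "emit now" decision from a Bernoulli policy.
            emit_dist = Bernoulli(logits=self.emit(h))
            emit = emit_dist.sample()
            log_probs.append(loc_dist.log_prob(loc) + emit_dist.log_prob(emit))
            if emit.item() == 1.0:
                # In the paper the bounds are trained with a supervised loss via
                # ordinary backprop; only the sampled decisions need REINFORCE.
                candidates.append(self.bounds(h))
        return candidates, torch.stack(log_probs).sum()

# REINFORCE episode: reward times the summed log-probs of the sampled actions.
agent = GlimpseAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-4)
feats = torch.randn(300, 1024)    # stand-in features for a 300-frame video
candidates, logp = agent(feats)
reward = 1.0                      # placeholder; the paper rewards correct, non-redundant detections
loss = -reward * logp             # minimizing this follows the policy gradient
opt.zero_grad()
loss.backward()
opt.step()
```

The observation budget (six glimpses against 300 frames here, i.e. 2%) is what the abstract's "a fraction (2% or less) of the video frames" refers to: the agent never processes the whole video.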

Results

Task tags: Video, Temporal Action Localization, Zero-Shot Learning, Activity Recognition, Action Localization, Action Recognition (each tag lists the same five THUMOS’14 results).

Dataset   | Metric      | Value | Model
THUMOS’14 | mAP IOU@0.1 | 48.9  | Yeung et al.
THUMOS’14 | mAP IOU@0.2 | 44.0  | Yeung et al.
THUMOS’14 | mAP IOU@0.3 | 36.0  | Yeung et al.
THUMOS’14 | mAP IOU@0.4 | 26.4  | Yeung et al.
THUMOS’14 | mAP IOU@0.5 | 17.1  | Yeung et al.
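
To read the table: "mAP IOU@k" is mean average precision where a predicted temporal segment counts as a true positive only if its temporal intersection-over-union with a ground-truth segment of the same class is at least k. A minimal sketch of that overlap test (plain Python; the function name is ours, and the full average-precision computation is omitted):

```python
def temporal_iou(pred, gt):
    """Temporal IoU of two (start, end) intervals, in frames or seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A loose detection passes the 0.1 threshold but fails at 0.5, which is why
# the mAP values above fall as the IoU threshold tightens.
print(temporal_iou((10.0, 30.0), (20.0, 40.0)))  # 0.333...
```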

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
CBF-AFA: Chunk-Based Multi-SSL Fusion for Automatic Fluency Assessment (2025-06-25)
MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Distributed Activity Detection for Cell-Free Hybrid Near-Far Field Communications (2025-06-17)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
Speaker Diarization with Overlapping Community Detection Using Graph Attention Networks and Label Propagation Algorithm (2025-06-03)
Attention Is Not Always the Answer: Optimizing Voice Activity Detection with Simple Feature Fusion (2025-06-02)