Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Weakly Supervised Action Selection Learning in Video

Junwei Ma, Satya Krishna Gorti, Maksims Volkovs, Guangwei Yu

2021-05-06 · CVPR 2021 · Weakly Supervised Action Localization · Temporal Localization
Paper · PDF · Code (official)

Abstract

Localizing actions in video is a core task in computer vision. The weakly supervised temporal localization problem investigates whether this task can be adequately solved with only video-level labels, significantly reducing the amount of expensive and error-prone annotation that is required. A common approach is to train a frame-level classifier where frames with the highest class probability are selected to make a video-level prediction. Frame-level activations are then used for localization. However, the absence of frame-level annotations causes the classifier to impart class bias on every frame. To address this, we propose the Action Selection Learning (ASL) approach to capture the general concept of action, a property we refer to as "actionness". Under ASL, the model is trained with a novel class-agnostic task to predict which frames will be selected by the classifier. Empirically, we show that ASL outperforms leading baselines on two popular benchmarks, THUMOS-14 and ActivityNet-1.2, with 10.3% and 5.7% relative improvement respectively. We further analyze the properties of ASL and demonstrate the importance of actionness. Full code for this work is available here: https://github.com/layer6ai-labs/ASL.
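The abstract describes two ingredients: top-k selection over frame-level class scores to form a video-level prediction, and a class-agnostic "actionness" target derived from which frames the classifier selects. A minimal NumPy sketch of these two ideas follows; the function names, the mean-over-top-k pooling, and the value of k are illustrative assumptions, not the paper's exact formulation (see the official repository for that).

```python
import numpy as np

def video_level_scores(frame_logits: np.ndarray, k: int) -> np.ndarray:
    """Aggregate frame-level class logits into a video-level prediction by
    averaging each class's top-k frame scores (an illustrative MIL pooling)."""
    # frame_logits: (T, C) -- T frames, C action classes
    topk = np.sort(frame_logits, axis=0)[-k:]   # (k, C) highest scores per class
    return topk.mean(axis=0)                    # (C,) video-level class scores

def actionness_targets(frame_logits: np.ndarray,
                       video_labels: np.ndarray, k: int) -> np.ndarray:
    """Build a class-agnostic target for an actionness head: a frame is
    positive if the classifier selects it (top-k) for any label the video has."""
    T, _ = frame_logits.shape
    targets = np.zeros(T)
    for c in np.flatnonzero(video_labels):      # only classes present in the video
        selected = np.argsort(frame_logits[:, c])[-k:]
        targets[selected] = 1.0
    return targets

# Toy example: 8 frames, 3 classes, video labeled with class 0 only.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 3))
labels = np.array([1, 0, 0])
print(video_level_scores(logits, k=2).shape)    # (3,)
print(actionness_targets(logits, labels, k=2))  # 0/1 vector over the 8 frames
```

The actionness head is then trained against these targets without any class information, which is what lets it capture a general notion of "action" rather than class-specific evidence.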

Results

Task | Dataset | Metric | Value | Model
Video | FineAction | mAP | 3.3 | ASL
Video | FineAction | mAP IOU@0.5 | 2.68 | ASL
Video | FineAction | mAP IOU@0.75 | 0.81 | ASL
Video | FineAction | mAP IOU@0.95 | 3.3 | ASL
Video | ActivityNet-1.2 | Mean mAP | 25.8 | ASL
Video | ActivityNet-1.2 | mAP@0.5 | 40.2 | ASL
Temporal Action Localization | FineAction | mAP | 3.3 | ASL
Temporal Action Localization | FineAction | mAP IOU@0.5 | 2.68 | ASL
Temporal Action Localization | FineAction | mAP IOU@0.75 | 0.81 | ASL
Temporal Action Localization | FineAction | mAP IOU@0.95 | 3.3 | ASL
Temporal Action Localization | ActivityNet-1.2 | Mean mAP | 25.8 | ASL
Temporal Action Localization | ActivityNet-1.2 | mAP@0.5 | 40.2 | ASL
Zero-Shot Learning | FineAction | mAP | 3.3 | ASL
Zero-Shot Learning | FineAction | mAP IOU@0.5 | 2.68 | ASL
Zero-Shot Learning | FineAction | mAP IOU@0.75 | 0.81 | ASL
Zero-Shot Learning | FineAction | mAP IOU@0.95 | 3.3 | ASL
Zero-Shot Learning | ActivityNet-1.2 | Mean mAP | 25.8 | ASL
Zero-Shot Learning | ActivityNet-1.2 | mAP@0.5 | 40.2 | ASL
Action Localization | FineAction | mAP | 3.3 | ASL
Action Localization | FineAction | mAP IOU@0.5 | 2.68 | ASL
Action Localization | FineAction | mAP IOU@0.75 | 0.81 | ASL
Action Localization | FineAction | mAP IOU@0.95 | 3.3 | ASL
Action Localization | ActivityNet-1.2 | Mean mAP | 25.8 | ASL
Action Localization | ActivityNet-1.2 | mAP@0.5 | 40.2 | ASL
Weakly Supervised Action Localization | FineAction | mAP | 3.3 | ASL
Weakly Supervised Action Localization | FineAction | mAP IOU@0.5 | 2.68 | ASL
Weakly Supervised Action Localization | FineAction | mAP IOU@0.75 | 0.81 | ASL
Weakly Supervised Action Localization | FineAction | mAP IOU@0.95 | 3.3 | ASL
Weakly Supervised Action Localization | ActivityNet-1.2 | Mean mAP | 25.8 | ASL
Weakly Supervised Action Localization | ActivityNet-1.2 | mAP@0.5 | 40.2 | ASL
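The "mAP IOU@t" metrics in the table count a predicted temporal segment as correct only when its temporal intersection-over-union with a ground-truth segment meets the threshold t (0.5, 0.75, 0.95). A minimal sketch of the underlying IoU computation for 1-D segments, with an illustrative function name:

```python
def temporal_iou(pred: tuple, gt: tuple) -> float:
    """IoU between two 1-D temporal segments given as (start, end) times."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A prediction covering [0, 10] s vs. ground truth [5, 15] s overlaps by 5 s
# out of a 15 s union, so it fails an IOU@0.5 test but passes IOU@0.25.
print(temporal_iou((0.0, 10.0), (5.0, 15.0)))
```

Average precision is then computed per class over predictions ranked by confidence, and mAP averages it across classes; "Mean mAP" on ActivityNet conventionally averages over a range of IoU thresholds.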

Related Papers

Fine-Tuning Large Audio-Language Models with LoRA for Precise Temporal Localization of Prolonged Exposure Therapy Elements (2025-06-11)
VideoMolmo: Spatio-Temporal Grounding Meets Pointing (2025-06-05)
DisTime: Distribution-based Time Representation for Video Large Language Models (2025-05-30)
Transforming faces into video stories -- VideoFace2.0 (2025-05-04)
MINERVA: Evaluating Complex Video Reasoning (2025-05-01)
TimeSoccer: An End-to-End Multimodal Large Language Model for Soccer Commentary Generation (2025-04-24)
Hierarchical and Multimodal Data for Daily Activity Understanding (2025-04-24)
A Large-Language Model Framework for Relative Timeline Extraction from PubMed Case Reports (2025-04-15)