Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization

Okan Köpüklü, Xiangyu Wei, Gerhard Rigoll

2019-11-15 · Action Detection · Action Localization · Action Recognition In Videos

Paper · PDF · Code (official)

Abstract

Spatiotemporal action localization requires incorporating two sources of information into the designed architecture: (1) temporal information from previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract this information with separate networks and use an extra fusion mechanism to obtain detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches that extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. YOWO is fast, running at 34 frames per second on 16-frame input clips and 62 frames per second on 8-frame input clips, making it currently the fastest state-of-the-art architecture for spatiotemporal action localization. Remarkably, YOWO outperforms the previous state-of-the-art results on J-HMDB-21 and UCF101-24 with an impressive improvement of ~3% and ~12%, respectively. Moreover, YOWO is the first and only single-stage architecture that provides competitive results on the AVA dataset. We make our code and pretrained models publicly available.
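The abstract's single-stage, two-branch design can be sketched in a few lines. The following is an illustrative numpy mock-up, not the paper's implementation: the 3D-CNN and 2D-CNN backbones are replaced by random-feature stand-ins, the fusion step (CFAM with channel attention in the paper) is simplified to channel concatenation followed by a 1×1 convolution, and all shapes (grid size, anchor count, channel widths) are hypothetical.

```python
import numpy as np

# Hypothetical shapes for illustration; not the paper's exact dimensions.
CLIP_LEN, H, W = 16, 224, 224          # input clip: 16 RGB frames
GRID, ANCHORS, CLASSES = 7, 5, 24      # detection grid, anchors, action classes

def backbone_3d(clip):
    """Stand-in for the 3D-CNN branch: consumes the whole clip and
    collapses the temporal axis into a (C, GRID, GRID) feature map."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((425, GRID, GRID))  # placeholder features

def backbone_2d(key_frame):
    """Stand-in for the 2D-CNN branch run on the key (last) frame only."""
    rng = np.random.default_rng(1)
    return rng.standard_normal((425, GRID, GRID))  # placeholder features

def fuse_and_predict(feat_3d, feat_2d):
    """Fuse both branches along the channel axis, then map to per-cell
    predictions: (x, y, w, h, conf) plus class scores for each anchor.
    The paper fuses with a channel attention mechanism; a plain
    concatenation + 1x1 conv stands in for it here."""
    fused = np.concatenate([feat_3d, feat_2d], axis=0)   # (850, 7, 7)
    out_ch = ANCHORS * (5 + CLASSES)                     # 5 * 29 = 145
    conv_1x1 = np.zeros((out_ch, fused.shape[0]))        # 1x1 conv as a matmul
    return np.tensordot(conv_1x1, fused, axes=([1], [0]))  # (145, 7, 7)

clip = np.zeros((CLIP_LEN, 3, H, W))
preds = fuse_and_predict(backbone_3d(clip), backbone_2d(clip[-1]))
print(preds.shape)  # (145, 7, 7)
```

Because both branches feed one prediction head, a single forward pass yields boxes and action probabilities together, which is what makes the architecture single-stage and end-to-end trainable.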

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Activity Recognition | AVA v2.2 | mAP (Val) | 20.2 | YOWO+LFB* |
| Activity Recognition | AVA v2.1 | mAP (Val) | 19.2 | YOWO+LFB* |
| Action Detection | UCF101-24 | Frame-mAP 0.5 | 87.3 | YOWO + LFB |
| Action Detection | UCF101-24 | Video-mAP 0.1 | 86.1 | YOWO + LFB |
| Action Detection | UCF101-24 | Video-mAP 0.2 | 78.6 | YOWO + LFB |
| Action Detection | UCF101-24 | Video-mAP 0.5 | 53.1 | YOWO + LFB |
| Action Detection | UCF101-24 | Frame-mAP 0.5 | 80.4 | YOWO |
| Action Detection | UCF101-24 | Video-mAP 0.1 | 82.5 | YOWO |
| Action Detection | UCF101-24 | Video-mAP 0.2 | 75.8 | YOWO |
| Action Detection | UCF101-24 | Video-mAP 0.5 | 48.8 | YOWO |
| Action Detection | J-HMDB | Frame-mAP 0.5 | 75.7 | YOWO + LFB |
| Action Detection | J-HMDB | Video-mAP 0.2 | 88.3 | YOWO + LFB |
| Action Detection | J-HMDB | Video-mAP 0.5 | 85.9 | YOWO + LFB |
| Action Detection | J-HMDB | Frame-mAP 0.5 | 74.4 | YOWO |
| Action Detection | J-HMDB | Video-mAP 0.2 | 87.8 | YOWO |
| Action Detection | J-HMDB | Video-mAP 0.5 | 85.7 | YOWO |
| Action Recognition | AVA v2.2 | mAP (Val) | 20.2 | YOWO+LFB* |
| Action Recognition | AVA v2.1 | mAP (Val) | 19.2 | YOWO+LFB* |
| Action Recognition In Videos | AVA v2.2 | mAP (Val) | 20.2 | YOWO+LFB* |
| Action Recognition In Videos | AVA v2.1 | mAP (Val) | 19.2 | YOWO+LFB* |

Related Papers

CBF-AFA: Chunk-Based Multi-SSL Fusion for Automatic Fluency Assessment (2025-06-25)
MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans (2025-06-25)
Distributed Activity Detection for Cell-Free Hybrid Near-Far Field Communications (2025-06-17)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
Speaker Diarization with Overlapping Community Detection Using Graph Attention Networks and Label Propagation Algorithm (2025-06-03)
Attention Is Not Always the Answer: Optimizing Voice Activity Detection with Simple Feature Fusion (2025-06-02)
Joint Activity Detection and Channel Estimation for Massive Connectivity: Where Message Passing Meets Score-Based Generative Priors (2025-05-31)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)