Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


STM: SpatioTemporal and Motion Encoding for Action Recognition

Boyuan Jiang, Mengmeng Wang, Weihao Gan, Wei Wu, Junjie Yan

2019-08-07 · ICCV 2019
Tasks: Action Classification · Action Recognition · Action Recognition In Videos · Temporal Action Localization
Paper · PDF

Abstract

Spatiotemporal and motion features are two complementary and crucial sources of information for video action recognition. Recent state-of-the-art methods adopt a 3D CNN stream to learn spatiotemporal features and another flow stream to learn motion features. In this work, we aim to efficiently encode these two features in a unified 2D framework. To this end, we first propose an STM block, which contains a Channel-wise SpatioTemporal Module (CSTM) to represent the spatiotemporal features and a Channel-wise Motion Module (CMM) to efficiently encode motion features. We then replace the original residual blocks in the ResNet architecture with STM blocks to form a simple yet effective STM network, introducing very limited extra computation cost. Extensive experiments demonstrate that the proposed STM network outperforms the state-of-the-art methods on both temporal-related datasets (i.e., Something-Something v1 & v2 and Jester) and scene-related datasets (i.e., Kinetics-400, UCF-101, and HMDB-51) with the help of encoding spatiotemporal and motion features together.
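The block design can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the actual CSTM and CMM use learned channel-wise convolutions inside ResNet bottlenecks, and the function names, the per-channel scale standing in for the CMM's spatial convolution, and the tensor shapes below are simplifying assumptions made for illustration.

```python
import numpy as np

def channelwise_temporal_conv(x, kernel):
    """CSTM sketch: one learned 1-D temporal filter per channel.
    x: (T, C, H, W) feature maps; kernel: (C, K) with odd K."""
    T, C, H, W = x.shape
    K = kernel.shape[1]
    pad = K // 2
    # Zero-pad only along the temporal axis.
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for t in range(T):
        for k in range(K):
            # Each channel c is filtered by its own kernel row kernel[c].
            out[t] += xp[t + k] * kernel[:, k][:, None, None]
    return out

def channelwise_motion(x, scale):
    """CMM sketch: motion encoded as a per-channel-scaled difference
    between adjacent frames; the last frame gets zero motion."""
    motion = np.zeros_like(x)
    motion[:-1] = x[1:] * scale[None, :, None, None] - x[:-1]
    return motion

def stm_block(x, temporal_kernel, motion_scale):
    """Sum-fuse the two feature streams with an identity shortcut,
    mirroring the residual-block placement described in the abstract."""
    return (x
            + channelwise_temporal_conv(x, temporal_kernel)
            + channelwise_motion(x, motion_scale))
```

Because both modules operate channel-wise on 2D feature maps, the extra cost over a plain residual block stays small, which is the efficiency argument the abstract makes against separate 3D CNN and optical-flow streams.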

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 73.7 | STM (ResNet-50)
Activity Recognition | Jester (Gesture Recognition) | Val | 96.7 | STM (ResNet-50, 16 frames)
Activity Recognition | Something-Something V1 | Top-1 Accuracy | 50.7 | STM (16 frames, ImageNet pretraining)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 64.2 | STM (16 frames, ImageNet pretraining)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 89.8 | STM (16 frames, ImageNet pretraining)
Activity Recognition | UCF101 | 3-fold Accuracy | 96.2 | STM (ImageNet+Kinetics pretrain)
Activity Recognition | HMDB-51 | Average accuracy of 3 splits | 72.2 | STM (ImageNet+Kinetics pretrain)

The identical results are also listed under the Action Recognition and Action Recognition In Videos tasks.

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)