Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


What Can Simple Arithmetic Operations Do for Temporal Modeling?

Wenhao Wu, Yuxin Song, Zhun Sun, Jingdong Wang, Chang Xu, Wanli Ouyang

2023-07-18 · ICCV 2023 · Action Classification · Video Recognition · Action Recognition

Paper · PDF · Code (official) · Code

Abstract

Temporal modeling plays a crucial role in understanding video content. Previous studies have tackled this problem by building increasingly complicated temporal relations across the frame sequence, enabled by ever more powerful hardware. In this work, we explore the potential of the four simple arithmetic operations for temporal modeling. Specifically, we first capture auxiliary temporal cues by computing the addition, subtraction, multiplication, and division between pairs of extracted frame features. Then, we extract corresponding features from these cues to enrich the original, temporally agnostic representation. We term this simple pipeline the Arithmetic Temporal Module (ATM), which operates on the stem of a visual backbone in a plug-and-play fashion. We conduct comprehensive ablation studies on the instantiation of ATMs and demonstrate that this module provides powerful temporal modeling capability at a low computational cost. Moreover, the ATM is compatible with both CNN- and ViT-based architectures. Our results show that ATM achieves superior performance on several popular video benchmarks. Specifically, on Something-Something V1, V2 and Kinetics-400, we reach top-1 accuracies of 65.6%, 74.6%, and 89.4% respectively. The code is available at https://github.com/whwu95/ATM.
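The core idea of the temporal cues described above can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the official implementation (see the linked repository for that); it assumes frame features are flattened to one vector per frame and that consecutive frames are paired:

```python
import numpy as np

def arithmetic_temporal_cues(frames, eps=1e-6):
    """Compute the four arithmetic cues between consecutive frame features.

    frames: array of shape (T, C) -- T frame features of dimension C.
    Returns an array of shape (T-1, 4, C): the addition, subtraction,
    multiplication, and division cues for each consecutive frame pair.
    """
    a, b = frames[:-1], frames[1:]       # consecutive frame pairs (t, t+1)
    add = a + b                          # addition cue
    sub = b - a                          # subtraction cue (frame difference)
    mul = a * b                          # multiplication cue
    div = b / (a + eps)                  # division cue; eps avoids divide-by-zero
    return np.stack([add, sub, mul, div], axis=1)
```

In the paper's pipeline, features would then be extracted from these cues and fused back into the backbone; the pairing scheme and normalization are design choices studied in the ablations, not fixed by this sketch.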

Results

Task                  Dataset                 Metric          Value  Model
Video                 Kinetics-400            Acc@1           89.4   ATM
Video                 Kinetics-400            Acc@5           98.3   ATM
Activity Recognition  Something-Something V1  Top-1 Accuracy  65.6   ATM
Activity Recognition  Something-Something V1  Top-5 Accuracy  88.6   ATM
Activity Recognition  Something-Something V2  Top-1 Accuracy  74.6   ATM
Activity Recognition  Something-Something V2  Top-5 Accuracy  94.4   ATM
Action Recognition    Something-Something V1  Top-1 Accuracy  65.6   ATM
Action Recognition    Something-Something V1  Top-5 Accuracy  88.6   ATM
Action Recognition    Something-Something V2  Top-1 Accuracy  74.6   ATM
Action Recognition    Something-Something V2  Top-5 Accuracy  94.4   ATM

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)