
Implicit Temporal Modeling with Learnable Alignment for Video Recognition

Shuyuan Tu, Qi Dai, Zuxuan Wu, Zhi-Qi Cheng, Han Hu, Yu-Gang Jiang

2023-04-20 · ICCV 2023
Tasks: Action Classification · Video Recognition · Action Recognition
Paper · PDF · Code (official)

Abstract

Contrastive language-image pretraining (CLIP) has demonstrated remarkable success in various image tasks. However, how to extend CLIP with effective temporal modeling is still an open and crucial problem. Existing factorized or joint spatial-temporal modeling trades off efficiency against performance. While modeling temporal information within a straight-through tube is widely adopted in the literature, we find that simple frame alignment already provides the essential information without temporal attention. To this end, we propose a novel Implicit Learnable Alignment (ILA) method, which minimizes the temporal modeling effort while achieving strong performance. Specifically, for a frame pair, an interactive point is predicted in each frame, serving as a mutual-information-rich region. By enhancing the features around the interactive point, the two frames are implicitly aligned. The aligned features are then pooled into a single token, which is leveraged in the subsequent spatial self-attention. Our method eliminates the costly or insufficient temporal self-attention in video. Extensive experiments on benchmarks demonstrate the superiority and generality of our module. In particular, the proposed ILA achieves a top-1 accuracy of 88.7% on Kinetics-400 with far fewer FLOPs than Swin-L and ViViT-H. Code is released at https://github.com/Francis-Rings/ILA.
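To make the alignment idea in the abstract concrete, below is a minimal PyTorch sketch of the described pipeline: predict an interactive point per frame of a frame pair, enhance features around that point to implicitly align the frames, and pool the aligned features into a single token for later spatial self-attention. This is not the official implementation; the module name, the Gaussian-mask formulation, the neighbor-pairing via a temporal roll, and all hyperparameters are illustrative assumptions, not details taken from the paper or the released code.

```python
# Illustrative sketch of implicit frame alignment (not the official ILA code).
# Assumes ViT-style patch tokens of shape (batch, frames, patches, dim).
import torch
import torch.nn as nn


class ImplicitFrameAlignment(nn.Module):
    def __init__(self, dim: int, patch_grid: int = 14):
        super().__init__()
        self.patch_grid = patch_grid
        # Predicts an "interactive point" (x, y in [0, 1]) per frame from the
        # pooled features of a frame pair (assumed design, for illustration).
        self.point_head = nn.Linear(2 * dim, 2)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, N, D) patch tokens for T frames, N = patch_grid ** 2
        B, T, N, D = feats.shape
        prev = torch.roll(feats, shifts=1, dims=1)                # pair each frame with its neighbor
        pair = torch.cat([feats.mean(2), prev.mean(2)], dim=-1)   # (B, T, 2D)
        point = self.point_head(pair).sigmoid()                   # (B, T, 2) point per frame

        # Soft spatial mask peaked at the predicted point (bandwidth is a guess).
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, self.patch_grid, device=feats.device),
            torch.linspace(0, 1, self.patch_grid, device=feats.device),
            indexing="ij",
        )
        grid = torch.stack([xs, ys], dim=-1).view(1, 1, N, 2)
        dist2 = ((grid - point.view(B, T, 1, 2)) ** 2).sum(-1)    # (B, T, N)
        mask = torch.exp(-dist2 / 0.05)

        # Enhance features around the interactive point, then pool the aligned
        # features into one alignment token per frame for spatial attention.
        aligned = feats * (1.0 + mask.unsqueeze(-1))
        return aligned.mean(dim=2)                                # (B, T, D)
```

Under these assumptions, the module adds only a small linear head and element-wise operations per frame pair, which is how the abstract's claim of replacing temporal self-attention with lightweight alignment would play out in practice.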

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 88.7 | ILA (ViT-L/14)
Video | Kinetics-400 | Acc@5 | 97.8 | ILA (ViT-L/14)
Video | Kinetics-400 | Acc@1 | 85.7 | ILA (ViT-B/16)
Video | Kinetics-400 | Acc@5 | 97.2 | ILA (ViT-B/16)
Video | Kinetics-400 | Acc@1 | 82.4 | ILA (ViT-B/32)
Video | Kinetics-400 | Acc@5 | 95.8 | ILA (ViT-B/32)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 70.2 | ILA (ViT-L/14)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 91.8 | ILA (ViT-L/14)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 66.8 | ILA (ViT-B/16)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 90.3 | ILA (ViT-B/16)
Action Recognition | Something-Something V2 | Top-1 Accuracy | 70.2 | ILA (ViT-L/14)
Action Recognition | Something-Something V2 | Top-5 Accuracy | 91.8 | ILA (ViT-L/14)
Action Recognition | Something-Something V2 | Top-1 Accuracy | 66.8 | ILA (ViT-B/16)
Action Recognition | Something-Something V2 | Top-5 Accuracy | 90.3 | ILA (ViT-B/16)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)