Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TAda! Temporally-Adaptive Convolutions for Video Understanding

Ziyuan Huang, Shiwei Zhang, Liang Pan, Zhiwu Qing, Mingqian Tang, Ziwei Liu, Marcelo H. Ang Jr

Published: 2021-10-12 · ICLR 2022
Tasks: Action Classification · Video Understanding · Action Recognition · Temporal Action Localization
Links: Paper · PDF · Code · Code (official)

Abstract

Spatial convolutions are widely used in numerous deep video models. They fundamentally assume spatio-temporal invariance, i.e., using shared weights for every location in different frames. This work presents Temporally-Adaptive Convolutions (TAdaConv) for video understanding, which shows that adaptive weight calibration along the temporal dimension is an efficient way to facilitate modelling complex temporal dynamics in videos. Specifically, TAdaConv empowers spatial convolutions with temporal modelling abilities by calibrating the convolution weights for each frame according to its local and global temporal context. Compared to previous temporal modelling operations, TAdaConv is more efficient as it operates over the convolution kernels instead of the features, whose dimension is an order of magnitude smaller than the spatial resolutions. Further, the kernel calibration brings an increased model capacity. We construct TAda2D and TAdaConvNeXt networks by replacing the 2D convolutions in ResNet and ConvNeXt with TAdaConv, which leads to on-par or better performance compared to state-of-the-art approaches on multiple video action recognition and localization benchmarks. We also demonstrate that, as a readily plug-in operation with negligible computation overhead, TAdaConv can effectively improve many existing video models by a convincing margin.
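The core idea in the abstract can be illustrated with a minimal NumPy sketch of a temporally-adaptive 1x1 convolution: rather than sharing one kernel across all frames, each frame t applies a kernel calibrated by a factor computed from its local frame descriptor and the clip-level (global) context. The calibration map below (a linear map followed by tanh) and the parameter names `local_ctx_w` / `global_ctx_w` are illustrative assumptions; the paper uses small learned convolutional branches for the calibration.

```python
import numpy as np

def tadaconv_1x1(x, base_weight, local_ctx_w, global_ctx_w):
    """Sketch of a temporally-adaptive 1x1 convolution (TAdaConv idea).

    Each frame t uses a calibrated kernel W_t = alpha_t * W_base, where
    alpha_t depends on the frame's local and the clip's global context.

    x:            (T, C_in, H, W) video clip
    base_weight:  (C_out, C_in)   shared 1x1 kernel
    local_ctx_w,
    global_ctx_w: (C_in,)         hypothetical calibration parameters
    """
    T, C_in, H, W = x.shape
    # Per-frame descriptors via global average pooling: (T, C_in)
    desc = x.mean(axis=(2, 3))
    # Clip-level (global) context: (C_in,)
    global_desc = desc.mean(axis=0)
    out = np.empty((T, base_weight.shape[0], H, W))
    for t in range(T):
        # Per-channel calibration factor around 1.0; tanh(0) = 0, so with
        # zero context weights this reduces to a plain shared convolution.
        alpha_t = 1.0 + np.tanh(desc[t] * local_ctx_w + global_desc * global_ctx_w)
        weight_t = base_weight * alpha_t[None, :]  # calibrated kernel for frame t
        # Apply the 1x1 convolution: (C_out, C_in) @ (C_in, H*W)
        out[t] = (weight_t @ x[t].reshape(C_in, -1)).reshape(-1, H, W)
    return out
```

Note that calibrating the kernel (C_out x C_in values per frame) is much cheaper than transforming the feature maps themselves (C x H x W values per frame), which is the efficiency argument made in the abstract.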

Results

Task                 | Dataset                | Metric         | Value | Model
---------------------|------------------------|----------------|-------|-----------------------------------
Video                | Kinetics-400           | Acc@1          | 79.1  | TAdaConvNeXt-T
Video                | Kinetics-400           | Acc@5          | 93.7  | TAdaConvNeXt-T
Video                | Kinetics-400           | Acc@1          | 78.2  | TAda2D-En (ResNet-50, 8+16 frames)
Video                | Kinetics-400           | Acc@5          | 93.5  | TAda2D-En (ResNet-50, 8+16 frames)
Video                | Kinetics-400           | Acc@1          | 77.4  | TAda2D (ResNet-50, 16 frames)
Video                | Kinetics-400           | Acc@5          | 93.1  | TAda2D (ResNet-50, 16 frames)
Video                | Kinetics-400           | Acc@1          | 76.7  | TAda2D (ResNet-50, 8 frames)
Video                | Kinetics-400           | Acc@5          | 92.6  | TAda2D (ResNet-50, 8 frames)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 67.2  | TAda2D-En (ResNet-50, 8+16 frames)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 89.8  | TAda2D-En (ResNet-50, 8+16 frames)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 67.1  | TAdaConvNeXt-T
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 90.4  | TAdaConvNeXt-T
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 65.6  | TAda2D (ResNet-50, 16 frames)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 89.2  | TAda2D (ResNet-50, 16 frames)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 64    | TAda2D (ResNet-50, 8 frames)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 88    | TAda2D (ResNet-50, 8 frames)
Action Recognition   | Something-Something V2 | Top-1 Accuracy | 67.2  | TAda2D-En (ResNet-50, 8+16 frames)
Action Recognition   | Something-Something V2 | Top-5 Accuracy | 89.8  | TAda2D-En (ResNet-50, 8+16 frames)
Action Recognition   | Something-Something V2 | Top-1 Accuracy | 67.1  | TAdaConvNeXt-T
Action Recognition   | Something-Something V2 | Top-5 Accuracy | 90.4  | TAdaConvNeXt-T
Action Recognition   | Something-Something V2 | Top-1 Accuracy | 65.6  | TAda2D (ResNet-50, 16 frames)
Action Recognition   | Something-Something V2 | Top-5 Accuracy | 89.2  | TAda2D (ResNet-50, 16 frames)
Action Recognition   | Something-Something V2 | Top-1 Accuracy | 64    | TAda2D (ResNet-50, 8 frames)
Action Recognition   | Something-Something V2 | Top-5 Accuracy | 88    | TAda2D (ResNet-50, 8 frames)

Related Papers

- VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
- EmbRACE-3K: Embodied Reasoning and Action in Complex Environments (2025-07-14)
- Chat with AI: The Surprising Turn of Real-time Video Communication from Human to AI (2025-07-14)
- Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation (2025-07-08)
- Omni-Video: Democratizing Unified Video Understanding and Generation (2025-07-08)