Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Orthogonal Temporal Interpolation for Zero-Shot Video Recognition

Yan Zhu, Junbao Zhuo, Bin Ma, Jiajia Geng, Xiaoming Wei, Xiaolin Wei, Shuhui Wang

2023-08-14 · Video Recognition · Zero-Shot Action Recognition
Paper · PDF · Code (official)

Abstract

Zero-shot video recognition (ZSVR) is a task that aims to recognize video categories that have not been seen during the model training process. Recently, vision-language models (VLMs) pre-trained on large-scale image-text pairs have demonstrated impressive transferability for ZSVR. To make VLMs applicable to the video domain, existing methods often add a temporal learning module after the image-level encoder to learn the temporal relationships among video frames. Unfortunately, for videos from unseen categories, we observe an abnormal phenomenon: the model that uses the spatial-temporal feature performs much worse than the model that removes the temporal learning module and uses only the spatial feature. We conjecture that improper temporal modeling disrupts the spatial feature of the video. To verify this hypothesis, we propose Feature Factorization to retain the orthogonal temporal feature of the video and use interpolation to construct a refined spatial-temporal feature. The model using the appropriately refined spatial-temporal feature performs better than the one using only the spatial feature, which verifies the effectiveness of the orthogonal temporal feature for the ZSVR task. Therefore, an Orthogonal Temporal Interpolation module is designed to learn a better refined spatial-temporal video feature during training. Additionally, a Matching Loss is introduced to improve the quality of the orthogonal temporal feature. We propose a model called OTI for ZSVR that employs orthogonal temporal interpolation and the matching loss on top of VLMs. The ZSVR accuracies on popular video datasets (i.e., Kinetics-600, UCF101 and HMDB51) show that OTI outperforms the previous state-of-the-art method by a clear margin.
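The core idea in the abstract — factorize the temporal feature into components parallel and orthogonal to the spatial feature, keep only the orthogonal part, then interpolate it with the spatial feature — can be sketched as follows. This is a hypothetical illustration of feature factorization and interpolation, not the paper's exact implementation; the function name, the interpolation weight `lam`, and the final L2 normalization are assumptions.

```python
import numpy as np

def orthogonal_temporal_interpolation(spatial, temporal, lam=0.5):
    """Hypothetical sketch of Feature Factorization + interpolation.

    `spatial` and `temporal` are 1-D feature vectors for one video.
    The temporal feature is projected onto the spatial feature; the
    parallel component (which may disrupt the spatial feature) is
    discarded, and only the orthogonal residual is interpolated back in.
    """
    # Component of `temporal` parallel to `spatial` (vector projection)
    parallel = (temporal @ spatial) / (spatial @ spatial) * spatial
    # Orthogonal temporal feature: residual after removing the parallel part
    orthogonal = temporal - parallel
    # Refined spatial-temporal feature via linear interpolation
    refined = (1.0 - lam) * spatial + lam * orthogonal
    # Normalize, as is common before CLIP-style cosine-similarity matching
    return refined / np.linalg.norm(refined)
```

By construction, `orthogonal @ spatial` is (numerically) zero, so the interpolation can only add information that is not already contained in the spatial feature's direction.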

Results

Task | Dataset | Metric | Value | Model
Zero-Shot Action Recognition | UCF101 | Top-1 Accuracy | 92.8 | OTI (ViT-L/14)
Zero-Shot Action Recognition | Kinetics | Top-1 Accuracy | 70.6 | OTI (ViT-L/14)
Zero-Shot Action Recognition | HMDB51 | Top-1 Accuracy | 64.0 | OTI (ViT-L/14)

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
The Role of Video Generation in Enhancing Data-Limited Action Understanding (2025-05-26)
VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models (2025-05-13)
Gameplay Highlights Generation (2025-05-12)
Fast Adversarial Training with Weak-to-Strong Spatial-Temporal Consistency in the Frequency Domain on Videos (2025-04-21)
CA^2ST: Cross-Attention in Audio, Space, and Time for Holistic Video Recognition (2025-03-30)
Leveraging LLMs with Iterative Loop Structure for Enhanced Social Intelligence in Video Question Answering (2025-03-27)
BASKET: A Large-Scale Video Dataset for Fine-Grained Skill Estimation (2025-03-26)