Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Revisiting Classifier: Transferring Vision-Language Models for Video Recognition

Wenhao Wu, Zhun Sun, Wanli Ouyang

2022-07-04 · Action Classification · Video Recognition · Zero-Shot Action Recognition · Video Classification · Action Recognition · Classification · Language Modelling

Abstract

Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source vision-language pre-trained models that are large in both model architecture and amount of training data. In this study, we focus on transferring knowledge for video classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, leaving the use of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revisit the role of the linear classifier and replace it with knowledge from the pre-trained model: we utilize the well-pretrained language model to generate good semantic targets for efficient transfer learning. Our empirical study shows that this method improves both the performance and the training speed of video classification, with a negligible change to the model. This simple yet effective tuning paradigm achieves state-of-the-art performance and efficient training across various video recognition scenarios, i.e., zero-shot, few-shot, and general recognition. In particular, our paradigm achieves state-of-the-art accuracy of 87.8% on Kinetics-400, and also surpasses previous methods by 20~50% absolute top-1 accuracy under zero-shot and few-shot settings on five popular video datasets. Code and models can be found at https://github.com/whwu95/Text4Vis .
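The core idea from the abstract — replacing a randomly initialized linear classifier with semantic targets from a pre-trained text encoder — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the random `text_embeddings` below stand in for real text-encoder outputs (e.g., CLIP encodings of class-name prompts), and all names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: use class-name embeddings from a pre-trained text
# encoder as a frozen classifier weight matrix, instead of a randomly
# initialized linear head that must be learned from scratch.

rng = np.random.default_rng(0)
num_classes, dim = 400, 512  # e.g., Kinetics-400 classes, CLIP feature dim

# Stand-in for text-encoder outputs (in the real method these would come
# from the pre-trained language model, one embedding per class name).
text_embeddings = rng.standard_normal((num_classes, dim))
text_embeddings /= np.linalg.norm(text_embeddings, axis=1, keepdims=True)

def classify(video_features: np.ndarray) -> np.ndarray:
    """Score each class by cosine similarity between the video feature
    and that class's (frozen) text embedding."""
    feats = video_features / np.linalg.norm(video_features, axis=1, keepdims=True)
    return feats @ text_embeddings.T  # (batch, num_classes) logits

batch = rng.standard_normal((2, dim))  # stand-in for video encoder outputs
logits = classify(batch)
print(logits.shape)  # (2, 400)
```

Because the classifier weights carry pre-trained semantics and stay fixed, only the video encoder needs tuning, which is consistent with the abstract's claim of faster training with a negligible model change.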

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 87.8 | Text4Vis (CLIP ViT-L/14)
Video | Kinetics-400 | Acc@5 | 97.6 | Text4Vis (CLIP ViT-L/14)
Activity Recognition | ActivityNet | mAP | 96.9 | Text4Vis (w/ ViT-L)
Activity Recognition | UCF101 | 3-fold Accuracy | 98.2 | Text4Vis
Action Recognition | ActivityNet | mAP | 96.9 | Text4Vis (w/ ViT-L)
Action Recognition | UCF101 | 3-fold Accuracy | 98.2 | Text4Vis
Zero-Shot Action Recognition | UCF101 | Top-1 Accuracy | 85.8 | Text4Vis
Zero-Shot Action Recognition | Kinetics | Top-1 Accuracy | 68.9 | Text4Vis
Zero-Shot Action Recognition | Kinetics | Top-5 Accuracy | 90.3 | Text4Vis
Zero-Shot Action Recognition | HMDB51 | Top-1 Accuracy | 58.4 | Text4Vis
Zero-Shot Action Recognition | ActivityNet | Top-1 Accuracy | 84.6 | Text4Vis

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)