Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning

Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li

2022-06-27 · Action Classification · Transfer Learning · Parameter-Efficient Fine-Tuning · Video Understanding · Action Recognition · Temporal Action Localization

Paper · PDF · Code (official)

Abstract

Capitalizing on large pre-trained models for various downstream tasks of interest has recently emerged as a promising approach. Due to ever-growing model size, the standard full fine-tuning based task adaptation strategy becomes prohibitively costly in terms of model training and storage. This has led to a new research direction in parameter-efficient transfer learning. However, existing attempts typically focus on downstream tasks from the same modality (e.g., image understanding) as the pre-trained model. This creates a limit because for some specific modalities (e.g., video understanding) such a strong pre-trained model with sufficient knowledge is hardly available. In this work, we investigate such a novel cross-modality transfer learning setting, namely parameter-efficient image-to-video transfer learning. To solve this problem, we propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task. With built-in spatio-temporal reasoning in a compact design, ST-Adapter enables a pre-trained image model without temporal knowledge to reason about dynamic video content at a small (~8%) per-task parameter cost, requiring approximately 20 times fewer updated parameters than previous work. Extensive experiments on video action recognition tasks show that our ST-Adapter can match or even outperform the strong full fine-tuning strategy and state-of-the-art video models, whilst enjoying the advantage of parameter efficiency. The code and model are available at https://github.com/linziyi96/st-adapter
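The per-task parameter cost of an adapter of the bottleneck form the abstract describes (down-projection, depthwise spatio-temporal convolution, up-projection) can be sanity-checked with a back-of-the-envelope count. The sketch below uses illustrative assumptions, not the paper's exact configuration: a ViT-L-like backbone (24 blocks, width 1024, ~304M parameters), a hypothetical bottleneck width of 128, one adapter per block, and a 3x3x3 depthwise kernel. The exact percentage depends on these choices, so the result only shows the order of magnitude relative to full fine-tuning.

```python
def adapter_params(d_model, bottleneck, kernel=(3, 3, 3)):
    """Parameter count of one bottleneck adapter:
    down-projection -> depthwise 3D conv -> up-projection (weights + biases)."""
    k = kernel[0] * kernel[1] * kernel[2]
    down = d_model * bottleneck + bottleneck        # linear down-projection
    dwconv = bottleneck * k + bottleneck            # depthwise: one kernel per channel
    up = bottleneck * d_model + d_model             # linear up-projection
    return down + dwconv + up

# Hypothetical ViT-L-like setting (assumed, not from the paper's config tables).
blocks, d_model, backbone = 24, 1024, 304_000_000
per_task = blocks * adapter_params(d_model, bottleneck=128)
print(f"adapter params: {per_task / 1e6:.1f}M "
      f"({100 * per_task / backbone:.1f}% of backbone)")
# Prints roughly 6.4M parameters, i.e. a few percent of the backbone --
# the same order of magnitude as the paper's ~8% per-task figure.
```

Under these assumptions the updated-parameter count stays in the single-digit-percent range, which is the point of the adapter design: only the small inserted modules are trained and stored per task, while the frozen image backbone is shared.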

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 87.2 | ST-Adapter (ViT-L, CLIP)
Video | Kinetics-400 | Acc@5 | 97.6 | ST-Adapter (ViT-L, CLIP)
Activity Recognition | Something-Something V2 | GFLOPs | 8248 | ST-Adapter (ViT-L, CLIP)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 72.3 | ST-Adapter (ViT-L, CLIP)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 93.9 | ST-Adapter (ViT-L, CLIP)
Action Recognition | Something-Something V2 | GFLOPs | 8248 | ST-Adapter (ViT-L, CLIP)
Action Recognition | Something-Something V2 | Top-1 Accuracy | 72.3 | ST-Adapter (ViT-L, CLIP)
Action Recognition | Something-Something V2 | Top-5 Accuracy | 93.9 | ST-Adapter (ViT-L, CLIP)

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)