


Video Swin Transformer

Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu

2021-06-24 · CVPR 2022
Tasks: Action Classification, Video Recognition, Video Classification, General Classification, Video Understanding, Action Recognition
Links: Paper · PDF · Code (official)

Abstract

The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off compared to previous approaches which compute self-attention globally even with spatial-temporal factorization. The locality of the proposed video architecture is realized by adapting the Swin Transformer designed for the image domain, while continuing to leverage the power of pre-trained image models. Our approach achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including on action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2). The code and models will be made publicly available at https://github.com/SwinTransformer/Video-Swin-Transformer.
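The locality advocated above is realized by computing self-attention inside non-overlapping 3D (temporal × spatial) windows, alternating with a shifted-window variant so information can cross window boundaries. The sketch below is a minimal illustration of that partitioning step only, not the authors' implementation; it assumes PyTorch and feature maps whose dimensions divide evenly by the window size.

import torch

def window_partition_3d(x: torch.Tensor, window_size: tuple) -> torch.Tensor:
    """Split a video feature map into non-overlapping 3D windows.

    x: (B, D, H, W, C); window_size: (wd, wh, ww), each dividing D, H, W.
    Returns (num_windows * B, wd * wh * ww, C), ready for windowed attention.
    """
    B, D, H, W, C = x.shape
    wd, wh, ww = window_size
    x = x.view(B, D // wd, wd, H // wh, wh, W // ww, ww, C)
    # Bring the three window axes next to each other, then flatten them.
    windows = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return windows.view(-1, wd * wh * ww, C)

def shifted_window_partition_3d(x, window_size, shift_size):
    # Shifted windows: cyclically roll the feature map along (D, H, W)
    # before partitioning, so successive layers see offset windows and
    # patches near a window edge attend across the old boundary.
    shifted = torch.roll(x, shifts=[-s for s in shift_size], dims=(1, 2, 3))
    return window_partition_3d(shifted, window_size)

Because attention is computed per window, its cost grows linearly with the number of windows rather than quadratically with the total number of patches, which is the source of the speed-accuracy trade-off claimed in the abstract.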

Results

Task | Dataset | Metric | Value | Model
Video | Kinetics-400 | Acc@1 | 84.9 | Swin-L (384x384, ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@5 | 96.7 | Swin-L (384x384, ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@1 | 83.1 | Swin-L (ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@5 | 95.9 | Swin-L (ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@1 | 82.7 | Swin-B (ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@5 | 95.5 | Swin-B (ImageNet-21k pretrain)
Video | Kinetics-400 | Acc@1 | 80.6 | Swin-B (ImageNet-1k pretrain)
Video | Kinetics-400 | Acc@5 | 94.6 | Swin-B (ImageNet-1k pretrain)
Video | Kinetics-400 | Acc@1 | 80.6 | Swin-S (ImageNet-1k pretrain)
Video | Kinetics-400 | Acc@5 | 94.5 | Swin-S (ImageNet-1k pretrain)
Video | Kinetics-400 | Acc@1 | 78.8 | Swin-T (ImageNet-1k pretrain)
Video | Kinetics-400 | Acc@5 | 93.6 | Swin-T (ImageNet-1k pretrain)
Video | Kinetics-600 | Top-1 Accuracy | 86.1 | Swin-L (384x384, ImageNet-21k pretrain)
Video | Kinetics-600 | Top-5 Accuracy | 97.3 | Swin-L (384x384, ImageNet-21k pretrain)
Video | Kinetics-600 | Top-1 Accuracy | 84.0 | Swin-B (ImageNet-21k pretrain)
Video | Kinetics-600 | Top-5 Accuracy | 96.5 | Swin-B (ImageNet-21k pretrain)
Activity Recognition | Something-Something V2 | Parameters (M) | 89 | Swin-B (IN-21K + Kinetics-400 pretrain)
Activity Recognition | Something-Something V2 | Top-1 Accuracy | 69.6 | Swin-B (IN-21K + Kinetics-400 pretrain)
Activity Recognition | Something-Something V2 | Top-5 Accuracy | 92.7 | Swin-B (IN-21K + Kinetics-400 pretrain)
Action Recognition | Something-Something V2 | Parameters (M) | 89 | Swin-B (IN-21K + Kinetics-400 pretrain)
Action Recognition | Something-Something V2 | Top-1 Accuracy | 69.6 | Swin-B (IN-21K + Kinetics-400 pretrain)
Action Recognition | Something-Something V2 | Top-5 Accuracy | 92.7 | Swin-B (IN-21K + Kinetics-400 pretrain)
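For reference, the Acc@1/Acc@5 (top-1/top-5 accuracy) values above are the percentage of clips whose ground-truth label appears among the model's k highest-scoring classes. The helper below is my own illustration of that metric, assuming PyTorch logits; it is not code from the paper or its repository.

import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, ks=(1, 5)) -> dict:
    """logits: (N, num_classes); labels: (N,). Returns {k: accuracy in %}."""
    topk = logits.topk(max(ks), dim=1).indices   # (N, max_k) predicted class ids
    hits = topk.eq(labels.unsqueeze(1))          # (N, max_k) boolean matches
    return {k: 100.0 * hits[:, :k].any(dim=1).float().mean().item() for k in ks}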

Related Papers

VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
EmbRACE-3K: Embodied Reasoning and Action in Complex Environments (2025-07-14)
Chat with AI: The Surprising Turn of Real-time Video Communication from Human to AI (2025-07-14)
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation (2025-07-08)
Omni-Video: Democratizing Unified Video Understanding and Generation (2025-07-08)