Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ViViT: A Video Vision Transformer

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid

Published 2021-03-29 · ICCV 2021
Tasks: Image Classification · Action Classification · Video Classification · General Classification · Action Recognition · Classification
Links: Paper · PDF · Code (official) · nine further code implementations

Abstract

We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks. To facilitate further research, we release code at https://github.com/google-research/scenic/tree/main/scenic/projects/vivit
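The abstract's first step — extracting spatio-temporal tokens from the input video — can be sketched in a few lines of NumPy. The function and parameter names below (`tubelet_tokens`, `t`, `p`) are illustrative, not taken from the released Scenic code; the "/16x2" in the model names refers to 16×16 spatial patches spanning 2 frames, and this sketch assumes that non-overlapping tubelet interpretation.

```python
import numpy as np

def tubelet_tokens(video, t=2, p=16):
    """Split a video (T, H, W, C) into non-overlapping t x p x p tubelets,
    flattening each into one token -- a minimal version of the
    'spatio-temporal tokens' described in the abstract."""
    T, H, W, C = video.shape
    nt, nh, nw = T // t, H // p, W // p
    v = video[: nt * t, : nh * p, : nw * p]          # drop any remainder
    v = v.reshape(nt, t, nh, p, nw, p, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)             # (nt, nh, nw, t, p, p, C)
    return v.reshape(nt * nh * nw, t * p * p * C)    # one row per tubelet

# A 32-frame 224x224 RGB clip under a "/16x2" tokenisation:
tokens = tubelet_tokens(np.zeros((32, 224, 224, 3)), t=2, p=16)
print(tokens.shape)  # (3136, 1536): 16*14*14 tokens, each 2*16*16*3 values
```

In the actual models a learned linear projection (equivalently, a 3D convolution with stride equal to the tubelet size) maps each flattened tubelet to the transformer's embedding dimension; here the raw flattened values stand in for that step.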

Results

Task | Dataset | Metric | Value | Model
Video | MiT | Top-5 Accuracy | 64.9 | ViViT-L/16x2
Video | Kinetics-400 | Top-1 Accuracy | 84.9 | ViViT-H/16x2 (JFT)
Video | Kinetics-400 | Top-5 Accuracy | 95.8 | ViViT-H/16x2 (JFT)
Video | Kinetics-400 | Top-5 Accuracy | 94.7 | ViViT-L/16x2 320
Video | Kinetics-600 | Top-1 Accuracy | 85.8 | ViViT-H/16x2 (JFT)
Video | Kinetics-600 | Top-5 Accuracy | 96.5 | ViViT-H/16x2 (JFT)
Video | Kinetics-600 | Top-1 Accuracy | 84.3 | ViViT-L/16x2
Video | Kinetics-600 | Top-5 Accuracy | 95.6 | ViViT-L/16x2
Video | Kinetics-600 | Top-1 Accuracy | 83 | ViViT-L/16x2 (320x320)
Video | Kinetics-600 | Top-5 Accuracy | 95.7 | ViViT-L/16x2 (320x320)
Action / Activity Recognition | EPIC-KITCHENS-100 | Action@1 | 44 | ViViT-L/16x2 (Fact. encoder)
Action / Activity Recognition | EPIC-KITCHENS-100 | Noun@1 | 56.8 | ViViT-L/16x2 (Fact. encoder)
Action / Activity Recognition | EPIC-KITCHENS-100 | Verb@1 | 66.4 | ViViT-L/16x2 (Fact. encoder)
Action / Activity Recognition | Something-Something V2 | Top-1 Accuracy | 65.4 | ViViT-L/16x2 (Fact. encoder)
Action / Activity Recognition | Something-Something V2 | Top-5 Accuracy | 89.8 | ViViT-L/16x2 (Fact. encoder)
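Several results above use the "Fact. encoder" (factorised encoder) variant, one of the efficient factorisations of the spatial and temporal dimensions mentioned in the abstract: a spatial transformer runs within each frame, the per-frame outputs are pooled to one token per frame, and a temporal transformer then runs across frames. The sketch below reduces both transformers to bare, unparameterised single-head self-attention purely to show the data flow; all names here are illustrative, not from the released code.

```python
import numpy as np

def self_attn(x):
    """Unparameterised single-head self-attention: softmax(x x^T / sqrt(d)) x.
    x has shape (batch, tokens, dim)."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ x

def factorised_encoder(tokens):
    """tokens: (T, N, D) -- N spatial tokens per frame, T frames.
    Spatial attention within each frame, pool to one token per frame,
    then temporal attention across frames (mean pooling stands in for
    the learned CLS-token readout of the real model)."""
    spatial = self_attn(tokens)                 # attend within each frame
    frame_repr = spatial.mean(axis=1)           # (T, D): one token per frame
    temporal = self_attn(frame_repr[None])[0]   # attend across frames
    return temporal.mean(axis=0)                # clip-level representation

# 8 frames, 196 spatial tokens (14x14) per frame, 64-dim embeddings:
out = factorised_encoder(np.random.randn(8, 196, 64))
print(out.shape)  # (64,)
```

The appeal of this factorisation is cost: spatial attention is O(T·N²) and temporal attention O(T²) per layer, instead of the O((T·N)²) of joint spatio-temporal attention over all tokens at once.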

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
- Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)