Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning

Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, Yu-Gang Jiang

Published 2022-12-08 · CVPR 2023
Tasks: Action Classification · Representation Learning · Action Recognition · Self-Supervised Action Recognition
Links: Paper · PDF · Code (official)

Abstract

Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: firstly we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 76.7% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 2.4% respectively. When a larger ViT-Huge model is adopted, MVD achieves the state-of-the-art performance with 77.3% Top-1 accuracy on Something-Something-v2 and 41.1 mAP on AVA v2.2. Code is available at https://github.com/ruiwang2021/mvd.
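The second stage described in the abstract (masked feature modeling with spatial-temporal co-teaching) can be sketched as a single training-step loss computation: the student predicts patch features at masked positions, and those predictions are regressed against targets from both a frozen image teacher and a frozen video teacher. This is an illustrative NumPy sketch, not the paper's implementation; the MSE loss, equal teacher weighting, and all array names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_distillation_loss(student_pred, teacher_feat, mask):
    """Mean squared error between student predictions and teacher
    features, averaged over masked patch positions only.

    student_pred, teacher_feat: (N, L, D) patch feature arrays
    mask: (N, L) boolean, True where a patch was masked out
    """
    diff = (student_pred - teacher_feat) ** 2       # (N, L, D)
    per_patch = diff.mean(axis=-1)                  # (N, L)
    return (per_patch * mask).sum() / mask.sum()

# Toy shapes: 2 clips, 8 patches each, 16-dim features; ~75% masking.
N, L, D = 2, 8, 16
mask = rng.random((N, L)) < 0.75

# The student uses a separate prediction head per teacher (assumption).
pred_img = rng.normal(size=(N, L, D))   # head aimed at the image teacher
pred_vid = rng.normal(size=(N, L, D))   # head aimed at the video teacher
t_img = rng.normal(size=(N, L, D))      # frozen image-teacher features
t_vid = rng.normal(size=(N, L, D))      # frozen video-teacher features

# Spatial-temporal co-teaching: sum the two masked distillation losses.
loss = masked_distillation_loss(pred_img, t_img, mask) \
     + masked_distillation_loss(pred_vid, t_vid, mask)
```

In this framing, dropping either term recovers single-teacher distillation, which is the baseline the paper reports co-teaching improves upon.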

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | Kinetics-400 | Acc@1 | 87.2 | MVD (K400 pretrain, ViT-H, 16x224x224) |
| Video | Kinetics-400 | Acc@5 | 97.4 | MVD (K400 pretrain, ViT-H, 16x224x224) |
| Video | Kinetics-400 | Acc@1 | 86.4 | MVD (K400 pretrain, ViT-L, 16x224x224) |
| Video | Kinetics-400 | Acc@5 | 97.0 | MVD (K400 pretrain, ViT-L, 16x224x224) |
| Video | Kinetics-400 | Acc@1 | 83.4 | MVD (K400 pretrain, ViT-B, 16x224x224) |
| Video | Kinetics-400 | Acc@5 | 95.8 | MVD (K400 pretrain, ViT-B, 16x224x224) |
| Video | Kinetics-400 | Acc@1 | 81.0 | MVD (K400 pretrain, ViT-S, 16x224x224) |
| Video | Kinetics-400 | Acc@5 | 94.8 | MVD (K400 pretrain, ViT-S, 16x224x224) |
| Action / Activity Recognition | Something-Something V2 | Parameters (M) | 633 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-1 Accuracy | 77.3 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-5 Accuracy | 95.7 | MVD (Kinetics400 pretrain, ViT-H, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Parameters (M) | 305 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-1 Accuracy | 76.7 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-5 Accuracy | 95.5 | MVD (Kinetics400 pretrain, ViT-L, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Parameters (M) | 87 | MVD (Kinetics400 pretrain, ViT-B, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-1 Accuracy | 73.7 | MVD (Kinetics400 pretrain, ViT-B, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-5 Accuracy | 94.0 | MVD (Kinetics400 pretrain, ViT-B, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Parameters (M) | 22 | MVD (Kinetics400 pretrain, ViT-S, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-1 Accuracy | 70.9 | MVD (Kinetics400 pretrain, ViT-S, 16 frame) |
| Action / Activity Recognition | Something-Something V2 | Top-5 Accuracy | 92.8 | MVD (Kinetics400 pretrain, ViT-S, 16 frame) |
| Action / Activity Recognition | AVA v2.2 | mAP | 41.1 | MVD (Kinetics400 pretrain+finetune, ViT-H, 16x4) |
| Action / Activity Recognition | AVA v2.2 | mAP | 40.1 | MVD (Kinetics400 pretrain, ViT-H, 16x4) |
| Action / Activity Recognition | AVA v2.2 | mAP | 38.7 | MVD (Kinetics400 pretrain+finetune, ViT-L, 16x4) |
| Action / Activity Recognition | AVA v2.2 | mAP | 37.7 | MVD (Kinetics400 pretrain, ViT-L, 16x4) |
| Action / Activity Recognition | AVA v2.2 | mAP | 34.2 | MVD (Kinetics400 pretrain+finetune, ViT-B, 16x4) |
| Action / Activity Recognition | AVA v2.2 | mAP | 31.1 | MVD (Kinetics400 pretrain, ViT-B, 16x4) |
| Action / Activity Recognition | UCF101 | 3-fold Accuracy | 97.5 | MVD (ViT-B) |
| Action / Activity Recognition | HMDB51 | Top-1 Accuracy | 79.7 | MVD (ViT-B) |
