Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Broaden Your Views for Self-Supervised Video Learning

Adrià Recasens, Pauline Luc, Jean-Baptiste Alayrac, Luyu Wang, Ross Hemsley, Florian Strub, Corentin Tallec, Mateusz Malinowski, Viorica Patraucean, Florent Altché, Michal Valko, Jean-Bastien Grill, Aäron van den Oord, Andrew Zisserman

2021-03-30 · ICCV 2021
Tasks: Representation Learning · Audio Classification · Optical Flow Estimation · Self-Supervised Learning · Self-Supervised Audio Classification · Self-Supervised Action Recognition
Paper · PDF · Code (official)

Abstract

Most successful self-supervised learning methods are trained to align the representations of two independent views from the data. State-of-the-art methods in video are inspired by image techniques, where these two views are similarly extracted by cropping and augmenting the resulting crop. However, these methods miss a crucial element in the video domain: time. We introduce BraVe, a self-supervised learning framework for video. In BraVe, one of the views has access to a narrow temporal window of the video while the other view has broad access to the video content. Our models learn to generalise from the narrow view to the general content of the video. Furthermore, BraVe processes the views with different backbones, enabling the use of alternative augmentations or modalities in the broad view, such as optical flow, randomly convolved RGB frames, audio, or their combinations. We demonstrate that BraVe achieves state-of-the-art results in self-supervised representation learning on standard video and audio classification benchmarks including UCF101, HMDB51, Kinetics, ESC-50 and AudioSet.
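The core idea above — a narrow-window view trained to regress the representation of a broad view, with a separate backbone per view — can be sketched as follows. This is not the authors' implementation: the backbones are stand-in linear maps over flattened clip features, and all names (`narrow_net`, `broad_net`, `predictor`) are hypothetical; the sketch only shows the shape of the cross-view regression objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, eps=1e-8):
    # Normalize each row to unit length before the regression loss.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def regression_loss(pred, target):
    # BYOL-style normalized L2 regression from prediction to target.
    pred, target = l2_normalize(pred), l2_normalize(target)
    return float(np.mean(np.sum((pred - target) ** 2, axis=-1)))

batch, d_clip, d_feat = 4, 64, 16

# Stand-in "backbones": independent random linear maps, mirroring the
# fact that BraVe processes the two views with different networks
# (which is what allows e.g. flow or audio in the broad view).
narrow_net = rng.normal(size=(d_clip, d_feat)) / np.sqrt(d_clip)
broad_net = rng.normal(size=(d_clip, d_feat)) / np.sqrt(d_clip)
predictor = rng.normal(size=(d_feat, d_feat)) / np.sqrt(d_feat)

video = rng.normal(size=(batch, d_clip))            # full video content
narrow_view = video * (rng.random(d_clip) < 0.25)   # narrow temporal window
broad_view = video                                  # broad access to the video

z_narrow = narrow_view @ narrow_net
z_broad = broad_view @ broad_net

# The narrow view predicts the broad representation (in the paper this
# is done symmetrically in both directions, with stop-gradient targets).
loss = regression_loss(z_narrow @ predictor, z_broad)
print(loss)
```

Since both embeddings are unit-normalized, the per-example squared distance is bounded by 4; training drives it toward 0 as the narrow view learns to predict the broad content.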

Results

Task                  Dataset              Metric           Value  Model
--------------------  -------------------  ---------------  -----  ---------------------
Activity Recognition  UCF101 (finetuned)   3-fold Accuracy  95.7   BraVe:V-FA (TSM-50x2)
Activity Recognition  UCF101               3-fold Accuracy  93.1   BraVe:V-FA (TSM-50x2)
Activity Recognition  Kinetics-600         Top-1 Accuracy   71.4   BraVe:V-FA (TSM-50x2)
Activity Recognition  HMDB51               Top-1 Accuracy   70.5   BraVe:V-FA (TSM-50x2)
Activity Recognition  HMDB51 (finetuned)   Top-1 Accuracy   77.8   BraVe:V-FA (TSM-50x2)
Action Recognition    UCF101 (finetuned)   3-fold Accuracy  95.7   BraVe:V-FA (TSM-50x2)
Action Recognition    UCF101               3-fold Accuracy  93.1   BraVe:V-FA (TSM-50x2)
Action Recognition    Kinetics-600         Top-1 Accuracy   71.4   BraVe:V-FA (TSM-50x2)
Action Recognition    HMDB51               Top-1 Accuracy   70.5   BraVe:V-FA (TSM-50x2)
Action Recognition    HMDB51 (finetuned)   Top-1 Accuracy   77.8   BraVe:V-FA (TSM-50x2)
