Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Temporally Coherent Embeddings for Self-Supervised Video Representation Learning

Joshua Knights, Ben Harwood, Daniel Ward, Anthony Vanderkop, Olivia Mackenzie-Ross, Peyman Moghadam

2020-03-21 · Representation Learning · Metric Learning · Self-Supervised Learning · Action Recognition · Temporal Action Localization · Self-Supervised Action Recognition

Paper · PDF · Code (official)

Abstract

This paper presents TCE: Temporally Coherent Embeddings for self-supervised video representation learning. The proposed method exploits the inherent structure of unlabeled video data to explicitly enforce temporal coherency in the embedding space, rather than learning it indirectly through ranking or predictive proxy tasks. Just as high-level visual information in the world changes smoothly over time, we believe that nearby frames should map to nearby points in the learned representation. Using this assumption, we train our TCE model to encode videos such that adjacent frames lie close to each other while different videos are separated from one another. Using TCE we learn robust representations from large quantities of unlabeled video data. We thoroughly analyse and evaluate our self-supervised TCE models on the downstream task of video action recognition using multiple challenging benchmarks (Kinetics400, UCF101, HMDB51). With a simple but effective 2D-CNN backbone and only RGB stream inputs, TCE pre-trained representations outperform all previous self-supervised 2D-CNN and 3D-CNN pre-trained models on UCF101. The code and pre-trained models for this paper can be downloaded at: https://github.com/csiro-robotics/TCE
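The abstract's core idea, pulling temporally adjacent frames of the same video together while pushing frames from other videos apart, can be illustrated with a generic softmax contrastive loss. This is a sketch of that general idea, not the paper's exact TCE objective; the function name, temperature value, and input layout are assumptions for illustration.

```python
import numpy as np

def temporal_coherence_loss(embeddings, video_ids, temperature=0.1):
    """Softmax contrastive loss sketch over frame embeddings.

    embeddings: (N, D) array of frame embeddings.
    video_ids:  (N,) integer video id per frame; frames sharing an id
                are treated as positives, all others as negatives.
    """
    # L2-normalise so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (emb @ emb.T) / temperature
    n = len(video_ids)
    total = 0.0
    for i in range(n):
        positives = (video_ids == video_ids[i]) & (np.arange(n) != i)
        if not positives.any():
            continue
        # Denominator over all pairs except the frame itself.
        log_denom = np.log(np.exp(np.delete(sim[i], i)).sum())
        # Average -log softmax score of the positive (same-video) frames.
        total += (log_denom - sim[i][positives]).mean()
    return total / n
```

Under this formulation, a loss computed on embeddings where same-video frames cluster together is lower than one where videos are interleaved, which is the behaviour the paper's temporal coherency constraint targets.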

Results

Task                 | Dataset | Metric          | Value | Model
---------------------|---------|-----------------|-------|-------------------------
Activity Recognition | UCF101  | 3-fold Accuracy | 71.2  | TCE (ResNet-50)
Activity Recognition | UCF101  | 3-fold Accuracy | 68.8  | TCE (ResNet-18, Split 1)
Activity Recognition | UCF101  | 3-fold Accuracy | 68.2  | TCE (ResNet-18, Split 1)
Activity Recognition | HMDB51  | Top-1 Accuracy  | 36.6  | TCE (ResNet-50)
Activity Recognition | HMDB51  | Top-1 Accuracy  | 34.2  | TCE (ResNet-18)
Action Recognition   | UCF101  | 3-fold Accuracy | 71.2  | TCE (ResNet-50)
Action Recognition   | UCF101  | 3-fold Accuracy | 68.8  | TCE (ResNet-18, Split 1)
Action Recognition   | UCF101  | 3-fold Accuracy | 68.2  | TCE (ResNet-18, Split 1)
Action Recognition   | HMDB51  | Top-1 Accuracy  | 36.6  | TCE (ResNet-50)
Action Recognition   | HMDB51  | Top-1 Accuracy  | 34.2  | TCE (ResNet-18)

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Unsupervised Ground Metric Learning (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)