Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SLIC: Self-Supervised Learning with Iterative Clustering for Human Action Videos

Salar Hosseini Khorasgani, Yuxuan Chen, Florian Shkurti

Published: 2022-06-25 · CVPR 2022
Tasks: Video Retrieval · Action Classification · Self-Supervised Learning · Clustering · Contrastive Learning · Retrieval · Self-supervised Video Retrieval · Self-Supervised Action Recognition
Paper · PDF · Code (official)

Abstract

Self-supervised methods have significantly closed the gap with end-to-end supervised learning for image classification. In the case of human action videos, however, where both appearance and motion are significant factors of variation, this gap remains significant. One of the key reasons for this is that sampling pairs of similar video clips, a required step for many self-supervised contrastive learning methods, is currently done conservatively to avoid false positives. A typical assumption is that similar clips only occur temporally close within a single video, leading to insufficient examples of motion similarity. To mitigate this, we propose SLIC, a clustering-based self-supervised contrastive learning method for human action videos. Our key contribution is that we improve upon the traditional intra-video positive sampling by using iterative clustering to group similar video instances. This enables our method to leverage pseudo-labels from the cluster assignments to sample harder positives and negatives. SLIC outperforms state-of-the-art video retrieval baselines by +15.4% on top-1 recall on UCF101 and by +5.7% when directly transferred to HMDB51. With end-to-end finetuning for action classification, SLIC achieves 83.2% top-1 accuracy (+0.8%) on UCF101 and 54.5% on HMDB51 (+1.6%). SLIC is also competitive with the state-of-the-art in action classification after self-supervised pretraining on Kinetics400.
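The abstract's central idea — use iterative clustering over clip embeddings as pseudo-labels, then draw positives from within a cluster and negatives from other clusters — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy k-means, the function names `cluster_clips` and `sample_pair`, and the use of raw embeddings in place of learned video features are all illustrative assumptions.

```python
import numpy as np

def cluster_clips(embeddings, k, iters=10):
    """Toy k-means over clip embeddings (illustrative stand-in for the
    paper's iterative clustering step)."""
    # Initialize centers with evenly spaced points for determinism.
    step = max(1, len(embeddings) // k)
    centers = embeddings[::step][:k].copy()
    labels = np.zeros(len(embeddings), dtype=int)
    for _ in range(iters):
        # Assign each clip to its nearest cluster center.
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned clips.
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def sample_pair(labels, anchor, rng):
    """Treat cluster assignments as pseudo-labels: a positive is another
    clip from the anchor's cluster (possibly from a different video),
    a negative comes from a different cluster."""
    same = np.flatnonzero(labels == labels[anchor])
    same = same[same != anchor]
    diff = np.flatnonzero(labels != labels[anchor])
    pos = int(rng.choice(same)) if len(same) else anchor
    neg = int(rng.choice(diff))
    return pos, neg
```

The point of the sketch is the contrast with intra-video sampling: because positives are drawn cluster-wide rather than from temporally adjacent clips of one video, motion-similar clips from different videos can serve as positives.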

Results

| Task                 | Dataset | Metric                  | Value | Model          |
|----------------------|---------|-------------------------|-------|----------------|
| Activity Recognition | UCF101  | split-1 Top-1 Accuracy  | 83.2  | SLIC (R3D-18)  |
| Activity Recognition | HMDB51  | Top-1 Accuracy          | 54.5  | SLIC (R3D-18)  |
| Action Recognition   | UCF101  | split-1 Top-1 Accuracy  | 83.2  | SLIC (R3D-18)  |
| Action Recognition   | HMDB51  | Top-1 Accuracy          | 54.5  | SLIC (R3D-18)  |

Related Papers

- Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)