
Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget

Johannes Lehner, Benedikt Alkin, Andreas Fürst, Elisabeth Rumetshofer, Lukas Miklautz, Sepp Hochreiter

2023-04-20 · Self-Supervised Image Classification · Image Clustering · Clustering · Contrastive Learning

Paper · PDF · Code (official)

Abstract

Masked Image Modeling (MIM) methods, like Masked Autoencoders (MAE), efficiently learn a rich representation of the input. However, adapting to downstream tasks requires a sufficient amount of labeled data, since their rich features encode not only objects but also less relevant image background. In contrast, Instance Discrimination (ID) methods focus on objects. In this work, we study how to combine the efficiency and scalability of MIM with the ability of ID to perform downstream classification in the absence of large amounts of labeled data. To this end, we introduce Masked Autoencoder Contrastive Tuning (MAE-CT), a sequential approach that uses the implicit clustering of the Nearest Neighbor Contrastive Learning (NNCLR) objective to induce abstraction in the topmost layers of a pre-trained MAE. MAE-CT tunes the rich features such that they form semantic clusters of objects without using any labels. Notably, MAE-CT does not rely on hand-crafted augmentations and frequently achieves its best performance using only minimal augmentations (crop & flip). Further, MAE-CT is compute-efficient: it adds at most 10% overhead compared to MAE re-training. Applied to large and huge Vision Transformer (ViT) models, MAE-CT outperforms previous self-supervised methods trained on ImageNet in linear probing, k-NN, and low-shot classification accuracy, as well as in unsupervised clustering accuracy. With ViT-H/16, MAE-CT achieves a new state-of-the-art linear probing accuracy of 82.2%.
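To make the contrastive tuning step concrete, here is a minimal sketch of an NNCLR-style objective applied to the top of a pre-trained encoder. It assumes PyTorch; the module layout (`encoder.blocks`), the support queue, the temperature, and the number of tuned blocks are all illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, queue, temperature=0.1):
    """InfoNCE loss where each positive is replaced by its nearest
    neighbor from a support queue (the NNCLR objective). z1 and z2 are
    embeddings of two views of the same batch; queue has shape (Q, D)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)

    # Swap each anchor for its nearest neighbor in the support queue;
    # the queue holds past embeddings, so no gradient flows through it.
    nn_idx = (z1 @ queue.T).argmax(dim=1)        # (B,)
    neighbors = queue[nn_idx]                    # (B, D)

    # Contrast each neighbor against the second view's embeddings.
    logits = neighbors @ z2.T / temperature      # (B, B)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Contrastive tuning is meant to induce abstraction in the topmost
# layers only, leaving the rich lower-layer MAE features intact
# (the split point below is a guess, not the paper's setting).
def freeze_lower_blocks(encoder, n_tuned=4):
    for p in encoder.parameters():
        p.requires_grad = False
    for blk in encoder.blocks[-n_tuned:]:
        for p in blk.parameters():
            p.requires_grad = True
```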

Results

Task | Dataset | Metric | Value | Model
Image Clustering | ImageNet | Accuracy | 58 | MAE-CT (ViT-H/16 best)
Image Clustering | ImageNet | NMI | 81.8 | MAE-CT (ViT-H/16 best)
Image Clustering | ImageNet | Accuracy | 57.1 | MAE-CT (ViT-H/16 mean)
Image Clustering | ImageNet | NMI | 81.7 | MAE-CT (ViT-H/16 mean)
Image Clustering | Imagenet-dog-15 | ARI | 0.879 | MAE-CT (best)
Image Clustering | Imagenet-dog-15 | Accuracy | 0.943 | MAE-CT (best)
Image Clustering | Imagenet-dog-15 | Image Size | 224 | MAE-CT (best)
Image Clustering | Imagenet-dog-15 | NMI | 0.904 | MAE-CT (best)
Image Clustering | Imagenet-dog-15 | ARI | 0.821 | MAE-CT (mean)
Image Clustering | Imagenet-dog-15 | Accuracy | 0.874 | MAE-CT (mean)
Image Clustering | Imagenet-dog-15 | Image Size | 224 | MAE-CT (mean)
Image Clustering | Imagenet-dog-15 | NMI | 0.882 | MAE-CT (mean)
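For reference, here is a small sketch of how the clustering metrics above are typically computed: NMI directly via scikit-learn, and clustering accuracy via an optimal one-to-one matching (Hungarian algorithm) between predicted cluster ids and ground-truth labels. The helper below is illustrative, assuming scikit-learn and SciPy are available; it is not tied to this paper's evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one assignments of clusters to classes."""
    n = max(y_true.max(), y_pred.max()) + 1
    # cost[p, t] counts samples with predicted cluster p and true label t.
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    # Hungarian matching maximizes matched counts (minimize negated cost).
    row, col = linear_sum_assignment(cost.max() - cost)
    return cost[row, col].sum() / len(y_true)

# Same partition under permuted cluster ids scores perfectly on both metrics.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])
print(clustering_accuracy(y_true, y_pred))           # 1.0
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
```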

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)