Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Exploring Target Representations for Masked Autoencoders

Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, Rongrong Ji

2022-09-08

Tasks: Self-Supervised Image Classification · Image Classification · Representation Learning · Transfer Learning · Semantic Segmentation · Instance Segmentation · Knowledge Distillation · Object Detection

Paper · PDF · Code (official)

Abstract

Masked autoencoders have become popular training paradigms for self-supervised visual representation learning. These models randomly mask a portion of the input and reconstruct the masked portion according to the target representations. In this paper, we first show that a careful choice of the target representation is unnecessary for learning good representations, since different targets tend to derive similarly behaved models. Driven by this observation, we propose a multi-stage masked distillation pipeline and use a randomly initialized model as the teacher, enabling us to effectively train high-capacity models without any efforts to carefully design target representations. Interestingly, we further explore using teachers of larger capacity, obtaining distilled students with remarkable transferring ability. On different tasks of classification, transfer learning, object detection, and semantic segmentation, the proposed method to perform masked knowledge distillation with bootstrapped teachers (dBOT) outperforms previous self-supervised methods by nontrivial margins. We hope our findings, as well as the proposed method, could motivate people to rethink the roles of target representations in pre-training masked autoencoders. The code and pre-trained models are publicly available at https://github.com/liuxingbin/dbot.
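The pipeline the abstract describes has three moving parts: random masking of input patches, a distillation loss in which the student predicts the teacher's representation of the masked patches, and bootstrapping, where the trained student becomes the teacher for the next stage (starting from a randomly initialized teacher). The toy sketch below illustrates only that control flow with hypothetical linear "models" over flattened patch features in NumPy; it is not the authors' implementation, and all names (`masked_distill_step`, `init_model`, the dimensions) are invented for illustration.

```python
# Toy sketch of dBOT-style multi-stage masked distillation.
# Hypothetical linear "models" stand in for vision transformers.
import numpy as np

rng = np.random.default_rng(0)

DIM = 16          # per-patch feature dimension (toy value)
N_PATCHES = 8     # patches per image (toy value)
MASK_RATIO = 0.5  # fraction of patches masked each step


def init_model():
    # A "model" here is a random linear map over the flattened patch features.
    n = N_PATCHES * DIM
    return rng.normal(scale=0.1, size=(n, n))


def masked_distill_step(student, teacher, patches, lr=0.01):
    """One step: mask patches, regress student output onto teacher targets."""
    n_mask = int(MASK_RATIO * N_PATCHES)
    mask_patches = rng.choice(N_PATCHES, size=n_mask, replace=False)
    # Flattened feature indices belonging to the masked patches.
    mask_entries = (mask_patches[:, None] * DIM + np.arange(DIM)).ravel()

    x_full = patches.ravel()
    # Teacher produces targets from the *unmasked* input.
    target = (x_full @ teacher)[mask_entries]

    # Student only sees the visible patches (masked entries zeroed out).
    x = x_full.copy()
    x[mask_entries] = 0.0
    pred = (x @ student)[mask_entries]

    err = pred - target
    loss = float(np.mean(err ** 2))
    # Gradient of the MSE w.r.t. the student weights (masked columns only).
    grad = np.zeros_like(student)
    grad[:, mask_entries] = (2.0 / err.size) * np.outer(x, err)
    return student - lr * grad, loss


# Multi-stage pipeline: the teacher starts *randomly initialized*; after each
# stage the distilled student is bootstrapped into the next stage's teacher.
STAGES = 3
all_losses = []
teacher = init_model()
for stage in range(STAGES):
    student = init_model()
    losses = []
    for _ in range(200):
        patches = rng.normal(size=(N_PATCHES, DIM))
        student, loss = masked_distill_step(student, teacher, patches)
        losses.append(loss)
    all_losses.append(losses)
    teacher = student  # bootstrap: student becomes the new teacher
```

In this toy setting the distillation loss within a stage falls toward an irreducible floor (the student never sees the masked inputs the teacher's targets depend on), which loosely mirrors the paper's observation that even a random teacher provides a usable regression target.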

Results

| Task                  | Dataset       | Metric          | Value | Model             |
|-----------------------|---------------|-----------------|-------|-------------------|
| Semantic Segmentation | ADE20K        | Validation mIoU | 56.2  | dBOT ViT-L (CLIP) |
| Semantic Segmentation | ADE20K        | Validation mIoU | 55.2  | dBOT ViT-L        |
| Semantic Segmentation | ADE20K        | Validation mIoU | 52.9  | dBOT ViT-B (CLIP) |
| Semantic Segmentation | ADE20K        | Validation mIoU | 50.8  | dBOT ViT-B        |
| Object Detection      | COCO test-dev | box mAP         | 56.8  | dBOT ViT-L (CLIP) |
| Object Detection      | COCO test-dev | box mAP         | 56.1  | dBOT ViT-L        |
| Object Detection      | COCO test-dev | box mAP         | 53.6  | dBOT ViT-B (CLIP) |
| Object Detection      | COCO test-dev | box mAP         | 53.5  | dBOT ViT-B        |
| Instance Segmentation | COCO test-dev | mask AP         | 48.8  | dBOT ViT-L (CLIP) |
| Instance Segmentation | COCO test-dev | mask AP         | 48.3  | dBOT ViT-L        |
| Instance Segmentation | COCO test-dev | mask AP         | 46.3  | dBOT ViT-B        |
| Instance Segmentation | COCO test-dev | mask AP         | 46.2  | dBOT ViT-B (CLIP) |

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)