Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Context Autoencoder for Self-Supervised Representation Learning

Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, Jingdong Wang

2022-02-07 · Self-Supervised Image Classification · Representation Learning · Self-Supervised Learning · Semantic Segmentation · Instance Segmentation · Object Detection

Abstract

We present a novel masked image modeling (MIM) approach, the context autoencoder (CAE), for self-supervised representation pretraining. We pretrain an encoder by making predictions in the encoded representation space. Pretraining comprises two tasks: masked representation prediction (predict the representations of the masked patches) and masked patch reconstruction (reconstruct the masked patches). The network is an encoder-regressor-decoder architecture: the encoder takes the visible patches as input; the regressor predicts the representations of the masked patches, which are expected to align with the representations computed by the encoder, using the representations of the visible patches and the positions of the visible and masked patches; the decoder reconstructs the masked patches from the predicted representations. The CAE design separates learning the encoder (representation) from solving the pretraining tasks, and making predictions in the encoded representation space empirically benefits representation learning. We demonstrate the effectiveness of CAE through superior transfer performance on downstream tasks: semantic segmentation, object detection and instance segmentation, and classification. The code will be available at https://github.com/Atten4Vis/CAE.
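The encoder-regressor-decoder flow and the two pretraining losses described above can be sketched in a few lines. This is a minimal toy illustration with NumPy, not the paper's implementation: the single linear maps stand in for the ViT encoder, the cross-attention regressor, and the decoder, and the mean-pooled context is a crude stand-in for attending over visible tokens and positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; real CAE uses ViT patch embeddings).
num_patches, patch_dim, rep_dim = 16, 12, 8
patches = rng.standard_normal((num_patches, patch_dim))

# Randomly mask half the patches (True = masked).
mask = np.zeros(num_patches, dtype=bool)
mask[rng.choice(num_patches, num_patches // 2, replace=False)] = True

# Stand-in "networks": single linear maps instead of transformer blocks.
W_enc = rng.standard_normal((patch_dim, rep_dim)) * 0.1   # encoder
W_reg = rng.standard_normal((rep_dim, rep_dim)) * 0.1     # regressor
W_dec = rng.standard_normal((rep_dim, patch_dim)) * 0.1   # decoder

# 1) The encoder sees only the visible patches.
z_visible = patches[~mask] @ W_enc

# 2) The regressor predicts representations of the masked patches from the
#    visible context (mean pooling here; cross-attention in the paper).
context = z_visible.mean(axis=0)
z_pred = np.tile(context @ W_reg, (mask.sum(), 1))

# Alignment target: the encoder applied to the masked patches (in the real
# method no gradient flows through this target branch).
z_target = patches[mask] @ W_enc
loss_align = np.mean((z_pred - z_target) ** 2)   # masked representation prediction

# 3) The decoder reconstructs masked patch pixels from predicted representations.
recon = z_pred @ W_dec
loss_recon = np.mean((recon - patches[mask]) ** 2)  # masked patch reconstruction

loss = loss_align + loss_recon
```

Note that the decoder only ever consumes the *predicted* representations of masked patches, which is what encourages the encoder to do all representation learning while the regressor and decoder handle the pretraining tasks.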

Results

Task                  | Dataset      | Metric          | Value | Model
Semantic Segmentation | ADE20K       | Validation mIoU | 54.7  | CAE (ViT-L, UperNet)
Object Detection      | COCO minival | box AP          | 54.5  | CAE (ViT-L, Mask R-CNN, 1x schedule)

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)