Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MultiMAE: Multi-modal Multi-task Masked Autoencoders

Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir

2022-04-04 · Image Classification · Semantic Segmentation · Depth Estimation
Paper · PDF · Code (official)

Abstract

We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders (MultiMAE). It differs from standard Masked Autoencoding in two key aspects: I) it can optionally accept additional modalities of information in the input besides the RGB image (hence "multi-modal"), and II) its training objective accordingly includes predicting multiple outputs besides the RGB image (hence "multi-task"). We make use of masking (across image patches and input modalities) to make training MultiMAE tractable as well as to ensure cross-modality predictive coding is indeed learned by the network. We show this pre-training strategy leads to a flexible, simple, and efficient framework with improved transfer results to downstream tasks. In particular, the same exact pre-trained network can be flexibly used when additional information besides RGB images is available or when no information other than RGB is available - in all configurations yielding competitive to or significantly better results than the baselines. To avoid needing training datasets with multiple modalities and tasks, we train MultiMAE entirely using pseudo labeling, which makes the framework widely applicable to any RGB dataset. The experiments are performed on multiple transfer tasks (image classification, semantic segmentation, depth estimation) and datasets (ImageNet, ADE20K, Taskonomy, Hypersim, NYUv2). The results show an intriguingly impressive capability by the model in cross-modal/task predictive coding and transfer.
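The masking strategy described above keeps only a small budget of visible patch tokens, drawn jointly across all input modalities, so the encoder must learn cross-modal predictive coding. The following is a minimal sketch of that joint sampling step, not the authors' implementation: `sample_visible_tokens` is a hypothetical helper, and the paper additionally controls the per-modality split (e.g. via a Dirichlet distribution), which is omitted here for brevity.

```python
import random

def sample_visible_tokens(num_patches_per_modality, num_visible, seed=None):
    """Sample which patch tokens stay visible, jointly across modalities.

    Hypothetical helper illustrating MultiMAE-style masking: a fixed
    budget of visible tokens is drawn from the pooled patches of all
    modalities; everything else is masked and must be reconstructed.
    """
    rng = random.Random(seed)
    # Flatten (modality, patch index) pairs into one shared pool.
    pool = [(m, i)
            for m, n in num_patches_per_modality.items()
            for i in range(n)]
    visible = rng.sample(pool, num_visible)
    # Group the kept indices back by modality for the encoder.
    out = {m: [] for m in num_patches_per_modality}
    for m, i in visible:
        out[m].append(i)
    return out

# Example: RGB, depth, and semantic maps, each a 14x14 grid (196 patches);
# keep 98 visible tokens in total, shared across the three modalities.
mods = {"rgb": 196, "depth": 196, "semseg": 196}
vis = sample_visible_tokens(mods, num_visible=98, seed=0)
print(sum(len(v) for v in vis.values()))  # 98
```

Because the budget is shared, a modality can end up almost fully masked in a given sample, which is exactly what forces the network to predict one modality from another.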

Results

Task                  | Dataset    | Metric | Value | Model
Semantic Segmentation | ADE20K val | mIoU   | 46.2  | MultiMAE (ViT-B)
Semantic Segmentation | Hypersim   | mIoU   | 37.0  | MultiMAE (ViT-B)
Semantic Segmentation | Hypersim   | mIoU   | 36.5  | MAE (ViT-B)
Semantic Segmentation | Hypersim   | mIoU   | 32.5  | DINO (ViT-B)
Semantic Segmentation | Hypersim   | mIoU   | 31.7  | MoCo-v3 (ViT-B)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)