Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.
Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MetaFormer Baselines for Vision

Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng, Shuicheng Yan, Xinchao Wang

Published: 2022-10-24 · Tasks: Image Classification, Domain Generalization
Links: Paper · PDF · Code (2 official, 6 community implementations)

Abstract

MetaFormer, the abstracted architecture of the Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows:

(1) MetaFormer ensures a solid lower bound on performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K.

(2) MetaFormer works well with arbitrary token mixers. Even when the token mixer is specified as a random matrix that mixes tokens, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. MetaFormer's results can therefore be relied on when new token mixers are adopted.

(3) MetaFormer effortlessly offers state-of-the-art results. With conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves 85.5% accuracy at 224×224 resolution under normal supervised training, without external data or distillation.

In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance. We expect StarReLU to find great potential in MetaFormer-like models and other neural networks.
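The two ideas the abstract relies on, a MetaFormer block whose token mixer is a pluggable function and the StarReLU activation (scale · ReLU(x)² + bias), can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the shapes, the `scale`/`bias` defaults, and the toy weights are illustrative assumptions, and real models use learned parameters and per-stage configurations.

```python
import numpy as np

def star_relu(x, scale=0.9, bias=-0.45):
    # StarReLU: scale * ReLU(x)**2 + bias. In the paper, scale and bias
    # are learnable scalars; the defaults here are illustrative only.
    return scale * np.maximum(x, 0.0) ** 2 + bias

def layer_norm(x, eps=1e-6):
    # Normalize over the channel dimension (no learned affine, for brevity).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def metaformer_block(x, token_mixer, w1, w2):
    # x: (tokens, dim). The token mixer is a pluggable function:
    # identity mapping yields an IdentityFormer-style block, a frozen
    # random mixing matrix yields a RandFormer-style block.
    x = x + token_mixer(layer_norm(x))              # token-mixing sub-block
    x = x + star_relu(layer_norm(x) @ w1) @ w2      # channel-MLP sub-block
    return x

rng = np.random.default_rng(0)
n_tokens, dim = 4, 8
x = rng.standard_normal((n_tokens, dim))
w1 = rng.standard_normal((dim, 2 * dim))            # toy MLP weights
w2 = rng.standard_normal((2 * dim, dim))

identity = lambda t: t                              # IdentityFormer mixer
rand_m = rng.standard_normal((n_tokens, n_tokens)) / n_tokens
random_mix = lambda t: rand_m @ t                   # RandFormer mixer (frozen)

y_id = metaformer_block(x, identity, w1, w2)
y_rand = metaformer_block(x, random_mix, w1, w2)
```

Swapping `identity` for `random_mix` (or any other mixer, such as depthwise convolution or self-attention) leaves the rest of the block untouched, which is the abstraction the paper's observations rest on.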

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 29.6 | CAFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 31.7 | CAFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 33.5 | ConvFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 34.7 | ConvFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 45 | CAFormer-B36 (384) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 46.1 | CAFormer-B36 |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 47.8 | ConvFormer-B36 (384) |
| Domain Adaptation | ImageNet-R | Top-1 Error Rate | 48.9 | ConvFormer-B36 |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 79.5 | CAFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 73.5 | ConvFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 69.4 | CAFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 63.3 | ConvFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 61.9 | CAFormer-B36 (384) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 55.3 | ConvFormer-B36 (384) |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 48.5 | CAFormer-B36 |
| Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 40.1 | ConvFormer-B36 |
| Domain Adaptation | ImageNet-C | Mean Corruption Error (mCE) | 30.8 | CAFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-C | Mean Corruption Error (mCE) | 31.8 | CAFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-C | Mean Corruption Error (mCE) | 35 | ConvFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-C | Mean Corruption Error (mCE) | 42.6 | CAFormer-B36 |
| Domain Adaptation | ImageNet-C | Mean Corruption Error (mCE) | 46.3 | ConvFormer-B36 |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 54.5 | CAFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 52.9 | ConvFormer-B36 (IN21K, 384) |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 52.8 | CAFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 52.7 | ConvFormer-B36 (IN21K) |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 42.5 | CAFormer-B36 |
| Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy | 39.5 | ConvFormer-B36 |
| Image Classification | ImageNet | GFLOPs | 72.2 | CAFormer-B36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 66.5 | ConvFormer-B36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 42 | CAFormer-M36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 23.2 | CAFormer-B36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 22.6 | ConvFormer-B36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 26 | CAFormer-S36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 37.7 | ConvFormer-M36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 13.2 | CAFormer-M36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 22.4 | ConvFormer-S36 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 72.2 | CAFormer-B36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 42 | CAFormer-M36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 12.8 | ConvFormer-M36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 8 | CAFormer-S36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 26 | CAFormer-S36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 66.5 | ConvFormer-B36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 37.7 | ConvFormer-M36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 23.2 | CAFormer-B36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 13.4 | CAFormer-S18 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 7.6 | ConvFormer-S36 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 22.4 | ConvFormer-S36 (384 res) |
| Image Classification | ImageNet | GFLOPs | 13.2 | CAFormer-M36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 13.4 | CAFormer-S18 (384 res) |
| Image Classification | ImageNet | GFLOPs | 11.6 | ConvFormer-S18 (384 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 22.6 | ConvFormer-B36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 8 | CAFormer-S36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 12.8 | ConvFormer-M36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 11.6 | ConvFormer-S18 (384 res) |
| Image Classification | ImageNet | GFLOPs | 4.1 | CAFormer-S18 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 7.6 | ConvFormer-S36 (224 res) |
| Image Classification | ImageNet | GFLOPs | 3.9 | ConvFormer-S18 (224 res, 21K) |
| Image Classification | ImageNet | GFLOPs | 4.1 | CAFormer-S18 (224 res) |
| Image Classification | ImageNet | GFLOPs | 3.9 | ConvFormer-S18 (224 res) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 29.6 | CAFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 31.7 | CAFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 33.5 | ConvFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 34.7 | ConvFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 45 | CAFormer-B36 (384) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 46.1 | CAFormer-B36 |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 47.8 | ConvFormer-B36 (384) |
| Domain Generalization | ImageNet-R | Top-1 Error Rate | 48.9 | ConvFormer-B36 |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 79.5 | CAFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 73.5 | ConvFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 69.4 | CAFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 63.3 | ConvFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 61.9 | CAFormer-B36 (384) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 55.3 | ConvFormer-B36 (384) |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 48.5 | CAFormer-B36 |
| Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 40.1 | ConvFormer-B36 |
| Domain Generalization | ImageNet-C | Mean Corruption Error (mCE) | 30.8 | CAFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-C | Mean Corruption Error (mCE) | 31.8 | CAFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-C | Mean Corruption Error (mCE) | 35 | ConvFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-C | Mean Corruption Error (mCE) | 42.6 | CAFormer-B36 |
| Domain Generalization | ImageNet-C | Mean Corruption Error (mCE) | 46.3 | ConvFormer-B36 |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 54.5 | CAFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 52.9 | ConvFormer-B36 (IN21K, 384) |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 52.8 | CAFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 52.7 | ConvFormer-B36 (IN21K) |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 42.5 | CAFormer-B36 |
| Domain Generalization | ImageNet-Sketch | Top-1 Accuracy | 39.5 | ConvFormer-B36 |

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)