Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Scaling Vision with Sparse Mixture of Experts

Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby

2021-06-10 · NeurIPS 2021
Tasks: Image Classification · Few-Shot Image Classification
Paper · PDF · Code (official)

Abstract

Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer, that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks, while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade off performance and compute smoothly at test time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B parameter model that attains 90.35% on ImageNet.
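The two routing ideas in the abstract — sparse top-k expert selection per token, and prioritizing tokens across the batch under a fixed expert capacity — can be illustrated with a short sketch. This is not the authors' implementation (the official code is JAX-based); it is a minimal numpy illustration, and the function name `topk_gating`, the plain softmax router, and the top-1-only capacity accounting are simplifying assumptions.

```python
import numpy as np

def topk_gating(tokens, w_gate, k=2, capacity=None):
    """Illustrative sparse top-k MoE routing (not the official V-MoE code).

    tokens:   (num_tokens, d) patch embeddings
    w_gate:   (d, num_experts) router weights
    capacity: optional max tokens per expert; when set, tokens are kept
              in priority order (highest top gate score first), which is
              the intuition behind Batch Prioritized Routing.
    """
    logits = tokens @ w_gate
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                # softmax over experts
    topk_idx = np.argsort(probs, -1)[:, ::-1][:, :k]     # top-k experts per token
    topk_p = np.take_along_axis(probs, topk_idx, -1)     # their gate weights

    if capacity is None:
        return topk_idx, topk_p

    # Priority-based dropping (sketch, top-1 assignment only): sort tokens
    # by their best gate score so "important" tokens claim expert slots
    # first; tokens that find their expert full are dropped (skipped).
    order = np.argsort(-topk_p[:, 0])
    counts = np.zeros(w_gate.shape[1], dtype=int)
    keep = np.zeros(len(tokens), dtype=bool)
    for t in order:
        e = topk_idx[t, 0]
        if counts[e] < capacity:
            counts[e] += 1
            keep[t] = True
    return topk_idx, topk_p, keep
```

Lowering `capacity` drops more low-priority tokens, which is how this routing style trades compute for accuracy at test time.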

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Image Classification | JFT-300M | prec@1 | 60.62 | V-MoE-H/14 (Every-2) |
| Image Classification | JFT-300M | prec@1 | 60.12 | V-MoE-H/14 (Last-5) |
| Image Classification | JFT-300M | prec@1 | 57.65 | V-MoE-L/16 (Every-2) |
| Image Classification | JFT-300M | prec@1 | 56.68 | ViT-H/14 |
| Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 82.78 | ViT-MoE-15B (Every-2) |
| Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 78.21 | V-MoE-H/14 (Every-2) |
| Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 78.08 | V-MoE-H/14 (Last-5) |
| Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 77.1 | V-MoE-L/16 (Every-2) |
| Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 76.95 | ViT-H/14 |
| Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 84.29 | ViT-MoE-15B (Every-2) |
| Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 80.33 | V-MoE-H/14 (Every-2) |
| Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 80.1 | V-MoE-H/14 (Last-5) |
| Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 79.01 | ViT-H/14 |
| Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 68.66 | ViT-MoE-15B (Every-2) |
| Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 63.38 | V-MoE-H/14 (Every-2) |
| Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.95 | V-MoE-H/14 (Last-5) |
| Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.41 | V-MoE-L/16 (Every-2) |
| Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.34 | ViT-H/14 |
| Few-Shot Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 82.78 | ViT-MoE-15B (Every-2) |
| Few-Shot Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 78.21 | V-MoE-H/14 (Every-2) |
| Few-Shot Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 78.08 | V-MoE-H/14 (Last-5) |
| Few-Shot Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 77.1 | V-MoE-L/16 (Every-2) |
| Few-Shot Image Classification | ImageNet - 5-shot | Top-1 Accuracy | 76.95 | ViT-H/14 |
| Few-Shot Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 84.29 | ViT-MoE-15B (Every-2) |
| Few-Shot Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 80.33 | V-MoE-H/14 (Every-2) |
| Few-Shot Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 80.1 | V-MoE-H/14 (Last-5) |
| Few-Shot Image Classification | ImageNet - 10-shot | Top-1 Accuracy | 79.01 | ViT-H/14 |
| Few-Shot Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 68.66 | ViT-MoE-15B (Every-2) |
| Few-Shot Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 63.38 | V-MoE-H/14 (Every-2) |
| Few-Shot Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.95 | V-MoE-H/14 (Last-5) |
| Few-Shot Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.41 | V-MoE-L/16 (Every-2) |
| Few-Shot Image Classification | ImageNet - 1-shot | Top-1 Accuracy | 62.34 | ViT-H/14 |

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
- Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)
- FedGSCA: Medical Federated Learning with Global Sample Selector and Client Adaptive Adjuster under Label Noise (2025-07-13)