Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Aggregated Residual Transformations for Deep Neural Networks

Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He

Published: 2016-11-16 · CVPR 2017
Tasks: Image Classification, Domain Generalization, General Classification
Links: Paper · PDF · Code (one official implementation, plus community implementations)

Abstract

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
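The abstract's core idea can be sketched directly: a block aggregates a set of C (the cardinality) transformations with identical topology, each reducing the input to a low-dimensional embedding, transforming it, and projecting back, with the results summed and added to an identity shortcut. Below is a minimal illustrative sketch, assuming dense layers in place of the paper's convolutions and omitting batch norm and nonlinearities; the dimensions follow the paper's "32x4d" template (cardinality 32, bottleneck width 4), but all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_bottleneck, cardinality = 256, 4, 32  # the "32x4d" template

def make_path():
    """One transformation T_i: reduce to width-4 embedding, then expand back."""
    w_reduce = rng.standard_normal((d_in, d_bottleneck)) * 0.01
    w_expand = rng.standard_normal((d_bottleneck, d_in)) * 0.01
    return w_reduce, w_expand

# C parallel paths with the same topology -- the "set of transformations".
paths = [make_path() for _ in range(cardinality)]

def resnext_block(x):
    # y = x + sum_i T_i(x): aggregate the C transformations, add the shortcut.
    aggregated = sum(x @ w_reduce @ w_expand for w_reduce, w_expand in paths)
    return x + aggregated

x = rng.standard_normal(d_in)
y = resnext_block(x)
print(y.shape)  # (256,)
```

In practice the sum of identical-topology paths is implemented as a single grouped convolution with `groups = cardinality`, which is why increasing cardinality adds capacity without changing the overall parameter budget.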

Results

Task                  | Dataset               | Metric                       | Value | Model
Domain Adaptation     | VizWiz-Classification | Accuracy - All Images        | 51.7  | ResNeXt-101 32x16d
Domain Adaptation     | VizWiz-Classification | Accuracy - Clean Images      | 54.8  | ResNeXt-101 32x16d
Domain Adaptation     | VizWiz-Classification | Accuracy - Corrupted Images  | 48.1  | ResNeXt-101 32x16d
Image Classification  | GasHisSDB             | Accuracy                     | 98.59 | ResNeXt-50-32x4d
Image Classification  | GasHisSDB             | F1-Score                     | 99.25 | ResNeXt-50-32x4d
Image Classification  | GasHisSDB             | Precision                    | 99.94 | ResNeXt-50-32x4d
Image Classification  | ImageNet              | GFLOPs                       | 31.5  | ResNeXt-101 64x4
Image Classification  | ImageNet              | Top 5 Accuracy               | 94.7  | ResNeXt-101 64x4
Domain Generalization | VizWiz-Classification | Accuracy - All Images        | 51.7  | ResNeXt-101 32x16d
Domain Generalization | VizWiz-Classification | Accuracy - Clean Images      | 54.8  | ResNeXt-101 32x16d
Domain Generalization | VizWiz-Classification | Accuracy - Corrupted Images  | 48.1  | ResNeXt-101 32x16d

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)