Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SCARLET-NAS: Bridging the Gap between Stability and Scalability in Weight-sharing Neural Architecture Search

Xiangxiang Chu, Bo Zhang, Qingyuan Li, Ruijun Xu, Xudong Li

2019-08-16 · Image Classification · AutoML · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

Discovering powerful yet compact models is an important goal of neural architecture search. Previous two-stage one-shot approaches are limited by a search space with fixed depth. Adding a skip connection to the search space seems a handy way to make depth variable; however, it introduces a large range of perturbation during supernet training and makes it difficult to obtain a confident ranking of subnetworks. In this paper, we discover that skip connections bring about significant feature inconsistency compared with other operations, which potentially degrades the supernet performance. Based on this observation, we tackle the problem by imposing an equivariant learnable stabilizer to homogenize such disparities. Experiments show that our proposed stabilizer improves both the supernet's convergence and its ranking performance. With an evolutionary search backend that incorporates the stabilized supernet as an evaluator, we derive a family of state-of-the-art architectures, the SCARLET series of several depths; notably, SCARLET-A obtains 76.9% top-1 accuracy on ImageNet. Code is available at https://github.com/xiaomi-automl/ScarletNAS.
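The core idea in the abstract can be illustrated with a minimal sketch. The names below (ChoiceBlock, stab_w) and the per-channel linear map are illustrative assumptions, not the paper's implementation: in a one-shot supernet, each layer picks one candidate op, and a raw identity skip yields features statistically inconsistent with the conv candidates; the stabilizer replaces that raw skip with a learnable linear map (initialized at identity) during supernet training.

```python
import numpy as np

class ChoiceBlock:
    """Toy one-shot NAS choice block (hypothetical simplification).

    One candidate is a fixed random linear map standing in for a conv op;
    the skip candidate is routed through a learnable stabilizer matrix
    rather than a raw identity, mimicking the paper's learnable stabilizer.
    """

    def __init__(self, channels, rng):
        # stand-in for a convolutional candidate op
        self.conv_w = rng.standard_normal((channels, channels)) / np.sqrt(channels)
        # learnable stabilizer replacing the raw skip: starts as identity,
        # so at initialization the stabilized skip behaves like identity
        self.stab_w = np.eye(channels)

    def forward(self, x, op):
        if op == "conv":
            return x @ self.conv_w
        if op == "skip":  # stabilized skip connection
            return x @ self.stab_w
        raise ValueError(f"unknown op: {op}")

rng = np.random.default_rng(0)
block = ChoiceBlock(8, rng)
x = rng.standard_normal((4, 8))
# at initialization the stabilized skip is exactly the identity mapping
assert np.allclose(block.forward(x, "skip"), x)
```

During training, stab_w would be updated by gradient descent along with the rest of the supernet, letting the skip path's feature statistics align with those of the other candidates; at evaluation time the learned stabilizer can be folded away so the derived architecture keeps a plain skip.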

Results

Task                       | Dataset  | Metric                 | Value | Model
Neural Architecture Search | ImageNet | Accuracy (%)           | 76.9  | SCARLET-A
Neural Architecture Search | ImageNet | Top-1 Error Rate (%)   | 23.1  | SCARLET-A
Neural Architecture Search | ImageNet | Accuracy (%)           | 76.3  | SCARLET-B
Neural Architecture Search | ImageNet | Top-1 Error Rate (%)   | 23.7  | SCARLET-B
Neural Architecture Search | ImageNet | Accuracy (%)           | 75.6  | SCARLET-C
Neural Architecture Search | ImageNet | Top-1 Error Rate (%)   | 24.4  | SCARLET-C
Image Classification       | ImageNet | GFLOPs                 | 8.4   | SCARLET-A4
Image Classification       | ImageNet | GFLOPs                 | 0.73  | SCARLET-A
Image Classification       | ImageNet | GFLOPs                 | 0.658 | SCARLET-B
Image Classification       | ImageNet | GFLOPs                 | 0.56  | SCARLET-C
AutoML                     | ImageNet | Accuracy (%)           | 76.9  | SCARLET-A
AutoML                     | ImageNet | Top-1 Error Rate (%)   | 23.1  | SCARLET-A
AutoML                     | ImageNet | Accuracy (%)           | 76.3  | SCARLET-B
AutoML                     | ImageNet | Top-1 Error Rate (%)   | 23.7  | SCARLET-B
AutoML                     | ImageNet | Accuracy (%)           | 75.6  | SCARLET-C
AutoML                     | ImageNet | Top-1 Error Rate (%)   | 24.4  | SCARLET-C

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)