Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Neural Architecture Search without Training

Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley

2020-06-08 · Neural Architecture Search

Abstract

The time and effort involved in hand-designing deep neural networks is immense. This has prompted the development of Neural Architecture Search (NAS) techniques to automate this design. However, NAS algorithms tend to be slow and expensive; they need to train vast numbers of candidate networks to inform the search process. This could be alleviated if we could partially predict a network's trained accuracy from its initial state. In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance. We incorporate this measure into a simple algorithm that allows us to search for powerful networks without any training in a matter of seconds on a single GPU, and verify its effectiveness on NAS-Bench-101, NAS-Bench-201, NATS-Bench, and Network Design Spaces. Our approach can be readily combined with more expensive search methods; we examine a simple adaptation of regularised evolutionary search. Code for reproducing our experiments is available at https://github.com/BayesWatch/nas-without-training.
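The activation-overlap measure the abstract alludes to can be sketched concretely. In a ReLU network, each input in a mini-batch induces a binary code recording which units are active; an untrained network that assigns distinct codes to different inputs tends to train to higher accuracy, and this distinctness can be summarised by the log-determinant of a Hamming-similarity kernel over the codes. Below is a minimal NumPy sketch under those assumptions; `naswot_score`, `score_based_search`, and `codes_for` are illustrative names, not the authors' code (see the linked repository for the actual implementation):

```python
import numpy as np

def naswot_score(binary_codes):
    """Score a network from the binary ReLU activation patterns it
    assigns to a mini-batch of inputs.

    binary_codes: (N, A) array of 0/1; row i records which of the A
    ReLU units fire for input i. Builds K[i, j] = A - hamming(c_i, c_j)
    (the number of units on which inputs i and j agree) and returns
    log|K|: higher when the untrained network separates inputs well.
    """
    c = np.asarray(binary_codes, dtype=np.float64)
    # Agreements = matches on active units + matches on inactive units.
    k = c @ c.T + (1.0 - c) @ (1.0 - c).T
    sign, logdet = np.linalg.slogdet(k)
    # Duplicate codes make K singular; treat that as the worst score.
    return logdet if sign > 0 else float("-inf")

def score_based_search(architectures, codes_for, n_samples, rng):
    """Sample n_samples candidate architectures, score each without any
    training, and return the highest-scoring one (the N=10 / N=100
    settings in the results table correspond to n_samples).

    codes_for(arch) -> (N, A) binary activation codes for one batch.
    """
    picks = rng.choice(len(architectures), size=n_samples, replace=False)
    best, best_score = None, float("-inf")
    for i in picks:
        s = naswot_score(codes_for(architectures[i]))
        if s > best_score:
            best, best_score = architectures[i], s
    return best
```

Because scoring needs only a forward pass at initialisation, the whole search costs seconds on one GPU, which is why the reported search times are 1.7 s (N=10) and 17.4 s (N=100).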

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 38.33 | NAS without training (N=10)
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 1.7 | NAS without training (N=10)
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 36.37 | NAS without training (N=100)
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 17.4 | NAS without training (N=100)
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 38.33 | NAS without training (N=10)
AutoML | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 1.7 | NAS without training (N=10)
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 36.37 | NAS without training (N=100)
AutoML | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 17.4 | NAS without training (N=100)

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)