Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Random Search and Reproducibility for Neural Architecture Search

Liam Li, Ameet Talwalkar

2019-02-20 · Hyperparameter Optimization · Neural Architecture Search

Paper · PDF · Code

Abstract

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In this work, in order to help ground the empirical results in this field, we propose new NAS baselines that build off the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks---PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS, a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results. We note the lack of source material needed to exactly reproduce these results, and further discuss the robustness of published results given the various sources of variability in NAS experimental setups. Relatedly, we provide all information (code, random seeds, documentation) needed to exactly reproduce our results, and report our random search with weight-sharing results for each benchmark on multiple runs.
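The abstract's first baseline, random search with early stopping, can be sketched as a generic hyperparameter-optimization loop. The snippet below is illustrative only: the search space, the proxy `evaluate` function, and the successive-halving schedule are hypothetical stand-ins, not the paper's actual NAS setup.

```python
import random

def sample_config(rng):
    """Sample a random point from a toy search space (hypothetical)."""
    return {
        "lr": 10 ** rng.uniform(-4, -1),
        "layers": rng.randint(1, 8),
    }

def evaluate(config, budget):
    """Hypothetical proxy objective returning a validation 'loss'.
    In a real NAS run this would train the sampled architecture for
    `budget` epochs and return its validation error."""
    return abs(config["lr"] - 0.01) + config["layers"] / (10 * budget)

def random_search_early_stopping(n_configs=27, min_budget=1, eta=3, seed=0):
    """Random search with early stopping via successive halving:
    keep the best 1/eta of the configs at each rung while multiplying
    the training budget by eta, so poor configs are stopped early."""
    rng = random.Random(seed)
    configs = [sample_config(rng) for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

best = random_search_early_stopping()
print(best)
```

Successive halving is one common early-stopping rule; the key point the abstract makes is that even this simple scheme is competitive with more elaborate NAS methods.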

Results

Task                       | Dataset                        | Metric          | Value | Model
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 31.14 | RSPS
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 7587  | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 87.66 | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 84.16 | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Search time (s) | 7587  | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 58.33 | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 59    | RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Search time (s) | 7587  | RSPS
AutoML                     | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 31.14 | RSPS
AutoML                     | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 7587  | RSPS
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 87.66 | RSPS
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 84.16 | RSPS
AutoML                     | NAS-Bench-201, CIFAR-10        | Search time (s) | 7587  | RSPS
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 58.33 | RSPS
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 59    | RSPS
AutoML                     | NAS-Bench-201, CIFAR-100       | Search time (s) | 7587  | RSPS
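The RSPS rows above refer to random search with parameter (weight) sharing, the paper's second algorithm: every sampled architecture draws its weights from one shared store, so candidates are trained jointly and then ranked without retraining from scratch. The toy sketch below illustrates only this idea; the cell structure, the "training" update, and the scoring rule are hypothetical stand-ins, not the paper's code.

```python
import random

OPS = ["conv3x3", "conv1x1", "skip", "pool"]
NUM_EDGES = 4  # a toy cell with 4 decision points (hypothetical)

def sample_arch(rng):
    """An architecture is one op choice per edge of the cell."""
    return tuple(rng.choice(OPS) for _ in range(NUM_EDGES))

def shared_train(steps=200, seed=0):
    """Jointly 'train' shared per-(edge, op) weights by sampling a random
    architecture each step and nudging only the ops it uses; a stand-in
    for gradient steps on a weight-sharing supernetwork."""
    rng = random.Random(seed)
    shared = {(e, op): 0.0 for e in range(NUM_EDGES) for op in OPS}
    for _ in range(steps):
        arch = sample_arch(rng)
        for e, op in enumerate(arch):
            shared[(e, op)] += rng.random()  # stand-in update
    return shared

def score(arch, shared):
    """Rank a candidate using the shared weights, with no retraining."""
    return sum(shared[(e, op)] for e, op in enumerate(arch))

shared = shared_train()
rng = random.Random(1)
candidates = [sample_arch(rng) for _ in range(10)]
best = max(candidates, key=lambda a: score(a, shared))
print(best)
```

The cheap ranking step is what makes the reported search times (~7587 s on NAS-Bench-201) feasible: the expensive training is amortized across all candidates through the shared weights.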

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Overtuning in Hyperparameter Optimization (2025-06-24)
Quantum-Classical Hybrid Quantized Neural Network (2025-06-23)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)