Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective

Wuyang Chen, Xinyu Gong, Zhangyang Wang

2021-02-23 · ICLR 2021 · Neural Architecture Search

Abstract

Neural Architecture Search (NAS) has been intensively studied to automate the discovery of top-performing neural networks. Current works require heavy training of a supernet or intensive architecture evaluations, thus suffering from heavy resource consumption and often incurring search bias due to truncated training or approximations. Can we select the best neural architectures without involving any training and eliminate a drastic portion of the search cost? We provide an affirmative answer, by proposing a novel framework called training-free neural architecture search (TE-NAS). TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space. Both are motivated by recent theoretical advances in deep networks and can be computed without any training and without any labels. We show that: (1) these two measurements imply the trainability and expressivity of a neural network; (2) they strongly correlate with the network's test accuracy. Furthermore, we design a pruning-based NAS mechanism to achieve a more flexible and superior trade-off between trainability and expressivity during the search. In the NAS-Bench-201 and DARTS search spaces, TE-NAS completes a high-quality search at a cost of only 0.5 and 4 GPU hours with one 1080Ti on CIFAR-10 and ImageNet, respectively. We hope our work inspires more attempts at bridging the theoretical findings of deep networks and practical impacts in real NAS applications. Code is available at: https://github.com/VITA-Group/TENAS.
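The two label-free metrics described in the abstract can be sketched concretely. Below is a minimal PyTorch illustration — not the authors' implementation — of (a) the condition number of the empirical NTK as a trainability proxy, and (b) counting distinct ReLU activation patterns over random inputs as a crude stand-in for the number of linear regions. The tiny network, batch sizes, and sampling scheme are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch of TE-NAS-style training-free scoring on a toy MLP.
# Network shape and sample counts are arbitrary choices, not the paper's.
torch.manual_seed(0)

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(4, 8)  # small batch of unlabeled inputs

# (a) Empirical NTK: Jacobian of outputs w.r.t. all parameters, one row
# per input sample, then the Gram matrix J @ J^T.
grads = []
for i in range(x.size(0)):
    net.zero_grad()
    out = net(x[i:i + 1]).squeeze()
    out.backward()
    grads.append(torch.cat([p.grad.flatten() for p in net.parameters()]))
J = torch.stack(grads)              # shape: (batch, n_params)

ntk = J @ J.T                        # empirical NTK, (batch, batch)
eigvals = torch.linalg.eigvalsh(ntk)  # ascending eigenvalues
cond = (eigvals[-1] / eigvals[0]).item()  # lambda_max / lambda_min

# (b) Expressivity proxy: distinct hidden-layer ReLU activation patterns
# over random probes; more distinct patterns suggests more linear regions.
with torch.no_grad():
    probes = torch.randn(256, 8)
    pre_act = net[0](probes)                  # hidden pre-activations
    patterns = (pre_act > 0).to(torch.int8)
    n_patterns = len(set(map(tuple, patterns.tolist())))

print(f"NTK condition number: {cond:.2f}")
print(f"Distinct activation patterns: {n_patterns} / 256")
```

In TE-NAS, a lower NTK condition number indicates better trainability and more linear regions indicate higher expressivity; both are computed at initialization, with no training or labels, so ranking candidate architectures reduces to forward and backward passes on random data.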

Results

Task | Dataset | Metric | Value | Model
Neural Architecture Search | NATS-Bench Topology, CIFAR-10 | Test Accuracy | 93.9 | TE-NAS (Chen et al., 2021)
Neural Architecture Search | NATS-Bench Topology, CIFAR-100 | Test Accuracy | 71.24 | TE-NAS (Chen et al., 2021)
Neural Architecture Search | NATS-Bench Topology, ImageNet16-120 | Test Accuracy | 42.38 | TE-NAS (Chen et al., 2021)
AutoML | NATS-Bench Topology, CIFAR-10 | Test Accuracy | 93.9 | TE-NAS (Chen et al., 2021)
AutoML | NATS-Bench Topology, CIFAR-100 | Test Accuracy | 71.24 | TE-NAS (Chen et al., 2021)
AutoML | NATS-Bench Topology, ImageNet16-120 | Test Accuracy | 42.38 | TE-NAS (Chen et al., 2021)

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)