Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LayerNAS: Neural Architecture Search in Polynomial Complexity

Yicheng Fan, Dana Alon, Jingyue Shen, Daiyi Peng, Keshav Kumar, Yun Long, Xin Wang, Fotis Iliopoulos, Da-Cheng Juan, Erik Vee

2023-04-23 · Neural Architecture Search · Combinatorial Optimization
Paper · PDF

Abstract

Neural Architecture Search (NAS) has become a popular method for discovering effective model architectures, especially for target hardware. As such, NAS methods that find optimal architectures under constraints are essential. In our paper, we propose LayerNAS to address the challenge of multi-objective NAS by transforming it into a combinatorial optimization problem, which effectively constrains the search complexity to be polynomial. For a model architecture with $L$ layers, we perform layerwise-search for each layer, selecting from a set of search options $\mathbb{S}$. LayerNAS groups model candidates based on one objective, such as model size or latency, and searches for the optimal model based on another objective, thereby splitting the cost and reward elements of the search. This approach limits the search complexity to $ O(H \cdot |\mathbb{S}| \cdot L) $, where $H$ is a constant set in LayerNAS. Our experiments show that LayerNAS is able to consistently discover superior models across a variety of search spaces in comparison to strong baselines, including search spaces derived from NATS-Bench, MobileNetV2 and MobileNetV3.
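The abstract's key idea, grouping candidates by a cost objective (e.g. latency) into $H$ buckets while maximizing a reward objective (e.g. accuracy) layer by layer, can be sketched as a bucketed layerwise search. The code below is a toy illustration of that idea under assumed interfaces, not the paper's implementation: `cost` and `reward` are hypothetical stand-in functions, and ties and bucket granularity are handled naively.

```python
def layernas_search(num_layers, options, cost, reward, max_cost, H):
    """Toy LayerNAS-style layerwise search (a sketch, not the paper's code).

    Candidates are grouped into H cost buckets; within each bucket only the
    best-reward candidate survives to the next layer, so each layer examines
    at most H * |options| candidates, giving O(H * |options| * num_layers).
    """
    bucket_width = max_cost / H
    # bucket index -> (accumulated reward, accumulated cost, per-layer choices)
    buckets = {0: (0.0, 0.0, [])}
    for layer in range(num_layers):
        new_buckets = {}
        for r, c, choices in buckets.values():
            for s in options:
                nc = c + cost(layer, s)
                if nc > max_cost:          # prune: violates the cost constraint
                    continue
                h = min(int(nc / bucket_width), H - 1)
                nr = r + reward(layer, s)
                # keep only the best-reward candidate per cost bucket
                if h not in new_buckets or nr > new_buckets[h][0]:
                    new_buckets[h] = (nr, nc, choices + [s])
        buckets = new_buckets
    # best candidate across all surviving cost buckets
    return max(buckets.values())
```

With additive toy objectives, e.g. `layernas_search(3, [1, 2, 3], lambda l, s: s, lambda l, s: 0.1 * s, max_cost=6.0, H=6)`, the search returns a 3-layer architecture whose total cost respects the budget. Note that grouping by bucket is a heuristic: discarding all but one candidate per bucket can, in general, prune the true optimum.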

Results

Task                       | Dataset                          | Metric              | Value | Model
Neural Architecture Search | NATS-Bench Size, CIFAR-10        | Test Accuracy       | 93.2  | LayerNAS
Neural Architecture Search | NATS-Bench Size, CIFAR-10        | Validation Accuracy | 0.844 | LayerNAS
Neural Architecture Search | NATS-Bench Size, ImageNet16-120  | Test Accuracy       | 45.37 | LayerNAS
Neural Architecture Search | NATS-Bench Size, ImageNet16-120  | Validation Accuracy | 38.12 | LayerNAS
Neural Architecture Search | NATS-Bench Size, CIFAR-100       | Test Accuracy       | 70.64 | LayerNAS
Neural Architecture Search | NATS-Bench Size, CIFAR-100       | Validation Accuracy | 60.67 | LayerNAS
Neural Architecture Search | ImageNet                         | Top-1 Error Rate    | 21.4  | LayerNAS-600M
Neural Architecture Search | ImageNet                         | Top-1 Error Rate    | 22.9  | LayerNAS-300M
Neural Architecture Search | ImageNet                         | Top-1 Error Rate    | 24.4  | LayerNAS-220M
Neural Architecture Search | ImageNet                         | Top-1 Error Rate    | 31    | LayerNAS-60M
AutoML                     | NATS-Bench Size, CIFAR-10        | Test Accuracy       | 93.2  | LayerNAS
AutoML                     | NATS-Bench Size, CIFAR-10        | Validation Accuracy | 0.844 | LayerNAS
AutoML                     | NATS-Bench Size, ImageNet16-120  | Test Accuracy       | 45.37 | LayerNAS
AutoML                     | NATS-Bench Size, ImageNet16-120  | Validation Accuracy | 38.12 | LayerNAS
AutoML                     | NATS-Bench Size, CIFAR-100       | Test Accuracy       | 70.64 | LayerNAS
AutoML                     | NATS-Bench Size, CIFAR-100       | Validation Accuracy | 60.67 | LayerNAS
AutoML                     | ImageNet                         | Top-1 Error Rate    | 21.4  | LayerNAS-600M
AutoML                     | ImageNet                         | Top-1 Error Rate    | 22.9  | LayerNAS-300M
AutoML                     | ImageNet                         | Top-1 Error Rate    | 24.4  | LayerNAS-220M
AutoML                     | ImageNet                         | Top-1 Error Rate    | 31    | LayerNAS-60M

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
Large Language Models for Combinatorial Optimization: A Systematic Review (2025-07-04)
LRM-1B: Towards Large Routing Model (2025-07-04)
Higher-Order Neuromorphic Ising Machines -- Autoencoders and Fowler-Nordheim Annealers are all you need for Scalability (2025-06-24)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
On Training-Test (Mis)alignment in Unsupervised Combinatorial Optimization: Observation, Empirical Exploration, and Analysis (2025-06-20)
HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges (2025-06-18)