Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Where To Look -- Generative NAS is Surprisingly Efficient

Jovita Lukasik, Steffen Jung, Margret Keuper

2022-03-16 · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

The efficient, automated search for well-performing neural architectures (NAS) has drawn increasing attention in the recent past. The predominant research objective is to reduce the need for costly evaluations of neural architectures while efficiently exploring large search spaces. To this end, surrogate models embed architectures in a latent space and predict their performance, while generative models for neural architectures enable optimization-based search within the latent space the generator draws from. Both surrogate and generative models aim to facilitate query-efficient search in a well-structured latent space. In this paper, we further improve the trade-off between query efficiency and promising architecture generation by combining the advantages of both efficient surrogate models and generative design. To this end, we propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces. This approach leads to very effective and efficient architecture search while keeping the number of queries low. In addition, our approach allows us to jointly optimize for multiple objectives, such as accuracy and hardware latency, in a straightforward manner. We show the benefit of this approach not only for the optimization of architectures for highest classification accuracy but also in the context of hardware constraints, and we outperform state-of-the-art methods on several NAS benchmarks for single and multiple objectives. We also achieve state-of-the-art performance on ImageNet. The code is available at http://github.com/jovitalukasik/AG-Net.
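The core loop the abstract describes — a generator paired with a surrogate, iteratively refocused on increasingly promising latent subspaces while keeping the query count low — can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in (a 2-D latent space, a closed-form `evaluate` in place of the costly architecture evaluation, and Gaussian sampling around the top-k queried points in place of the learned generator); it is not the AG-Net implementation.

```python
import random

random.seed(0)

# Hypothetical stand-in for the costly architecture evaluation:
# each "architecture" is a 2-D latent point; higher score is better.
def evaluate(z):
    x, y = z
    return -((x - 1.0) ** 2) - ((y + 0.5) ** 2)  # optimum at (1.0, -0.5)

# Generator stand-in: sample a new latent point near a promising one.
def generate_near(points, sigma=0.5):
    cx, cy = random.choice(points)
    return (cx + random.gauss(0.0, sigma), cy + random.gauss(0.0, sigma))

# Initial random queries of the search space.
archive = []
for _ in range(8):
    z = (random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0))
    archive.append((z, evaluate(z)))

# Iteratively refocus generation on the promising latent subspace spanned
# by the best queried points, spending only one new query per step.
for _ in range(30):
    archive.sort(key=lambda t: t[1], reverse=True)
    promising = [z for z, _ in archive[:4]]   # top-k queried points
    z = generate_near(promising)
    archive.append((z, evaluate(z)))

best_z, best_score = max(archive, key=lambda t: t[1])
print(round(best_score, 3))
```

In the paper the generator and the surrogate predictor are learned models trained jointly; this sketch only mirrors the search dynamics — promising samples steer where the next candidates are generated, so the latent region being sampled tightens around good architectures as queries accumulate.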

Results

Task | Dataset | Metric | Value | Model
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.42 | AG-Net
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val) | 46.73 | AG-Net
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.37 | AG-Net
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 91.61 | AG-Net
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 73.51 | AG-Net
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 73.49 | AG-Net
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.42 | AG-Net
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val) | 46.73 | AG-Net
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.37 | AG-Net
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 91.61 | AG-Net
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 73.51 | AG-Net
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 73.49 | AG-Net

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)