Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Generic Neural Architecture Search via Regression

Yuhong Li, Cong Hao, Pan Li, JinJun Xiong, Deming Chen

2021-08-04 · NeurIPS 2021 · Tasks: Image Classification, Regression, Neural Architecture Search
Paper · PDF · Code (official) · Code

Abstract

Most existing neural architecture search (NAS) algorithms are dedicated to and evaluated by the downstream tasks, e.g., image classification in computer vision. However, extensive experiments have shown that prominent neural architectures, such as ResNet in computer vision and LSTM in natural language processing, are generally good at extracting patterns from the input data and perform well on different downstream tasks. In this paper, we attempt to answer two fundamental questions related to NAS. (1) Is it necessary to use the performance of specific downstream tasks to evaluate and search for good neural architectures? (2) Can we perform NAS effectively and efficiently while being agnostic to the downstream tasks? To answer these questions, we propose a novel and generic NAS framework, termed Generic NAS (GenNAS). GenNAS does not use task-specific labels but instead adopts regression on a set of manually designed synthetic signal bases for architecture evaluation. Such a self-supervised regression task can effectively evaluate the intrinsic power of an architecture to capture and transform the input signal patterns, and allows more efficient use of training samples. Extensive experiments across 13 CNN search spaces and one NLP space demonstrate the remarkable efficiency of GenNAS using regression, in terms of both evaluating the neural architectures (quantified by the ranking correlation, Spearman's rho, between the approximated performances and the downstream task performances) and the convergence speed for training (within a few seconds).
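The evaluation quality cited in the abstract and the results below is quantified by Spearman's rho: the rank correlation between an architecture's proxy (regression-based) score and its true downstream accuracy. As a minimal sketch of that metric, here is a plain-numpy implementation assuming no tied scores (the proxy scores and accuracies are hypothetical toy data, not figures from the paper):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Assumes no ties; argsort-of-argsort converts values to 0-based ranks.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy example: proxy scores perfectly preserve the accuracy ordering.
proxy_scores = [0.12, 0.34, 0.56, 0.78]   # hypothetical regression-proxy scores
accuracies   = [70.1, 75.4, 80.2, 85.9]   # hypothetical test accuracies
print(spearman_rho(proxy_scores, accuracies))  # → 1.0
```

A rho near 1.0 means the cheap regression proxy ranks architectures almost exactly as full downstream training would; GenNAS reports 0.87 on NAS-Bench-101. For production use, `scipy.stats.spearmanr` additionally handles ties.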

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 45.59 | GenNAS |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 1080 | GenNAS |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.18 | GenNAS |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Search time (s) | 1080 | GenNAS |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 72.56 | GenNAS |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Search time (s) | 1080 | GenNAS |
| Neural Architecture Search | NAS-Bench-101 | Spearman Correlation | 0.87 | GenNAS |
| AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 45.59 | GenNAS |
| AutoML | NAS-Bench-201, ImageNet-16-120 | Search time (s) | 1080 | GenNAS |
| AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.18 | GenNAS |
| AutoML | NAS-Bench-201, CIFAR-10 | Search time (s) | 1080 | GenNAS |
| AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 72.56 | GenNAS |
| AutoML | NAS-Bench-201, CIFAR-100 | Search time (s) | 1080 | GenNAS |
| AutoML | NAS-Bench-101 | Spearman Correlation | 0.87 | GenNAS |

Related Papers

- Language Integration in Fine-Tuning Multimodal Large Language Models for Image-Based Regression (2025-07-20)
- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
- Neural Network-Guided Symbolic Regression for Interpretable Descriptor Discovery in Perovskite Catalysts (2025-07-16)