Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


CR-LSO: Convex Neural Architecture Optimization in the Latent Space of Graph Variational Autoencoder with Input Convex Neural Networks

Xuan Rao, Bo Zhao, Xiaosong Yi, Derong Liu

2022-11-11 · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

In neural architecture search (NAS) methods based on latent space optimization (LSO), a deep generative model is trained to embed discrete neural architectures into a continuous latent space. In this case, different optimization algorithms that operate in the continuous space can be implemented to search neural architectures. However, the optimization of latent variables is challenging for gradient-based LSO since the mapping from the latent space to the architecture performance is generally non-convex. To tackle this problem, this paper develops a convexity regularized latent space optimization (CR-LSO) method, which regularizes the learning process of the latent space in order to obtain a convex architecture performance mapping. Specifically, CR-LSO trains a graph variational autoencoder (G-VAE) to learn the continuous representations of discrete architectures. Simultaneously, the learning process of the latent space is regularized by the guaranteed convexity of input convex neural networks (ICNNs). In this way, the G-VAE is forced to learn a convex mapping from the architecture representation to the architecture performance. Then, CR-LSO approximates the performance mapping using the ICNN and leverages the estimated gradient to optimize neural architecture representations. Experimental results on three popular NAS benchmarks show that CR-LSO achieves competitive results in terms of both computational complexity and architecture performance.
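The "guaranteed convexity" the abstract relies on comes from a simple structural constraint on ICNNs (Amos et al., 2017): each hidden layer applies a convex, non-decreasing activation to a sum of a non-negatively weighted function of the previous layer and an unconstrained affine function of the input. Below is a minimal NumPy sketch of this construction with a numerical convexity check; the class name, layer sizes, and initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softplus(z):
    # Convex, non-decreasing activation, as required by the ICNN construction.
    return np.logaddexp(0.0, z)

class ICNN:
    """Minimal input convex neural network sketch (hypothetical dimensions).

    f(x) is convex in x because:
      - z_1 = softplus(affine(x)) is convex (convex non-decreasing o affine);
      - z_{k+1} = softplus(Wz_k @ z_k + Wx_k @ x + b_k) is convex, since
        Wz_k >= 0 preserves convexity and softplus is convex non-decreasing;
      - the output is a non-negative combination of z plus an affine term in x.
    """
    def __init__(self, in_dim, hidden=16, depth=3, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = [rng.standard_normal((hidden, in_dim)) for _ in range(depth)]
        # Weights on the previous hidden state must be non-negative.
        self.Wz = [np.abs(rng.standard_normal((hidden, hidden)))
                   for _ in range(depth - 1)]
        self.b = [rng.standard_normal(hidden) for _ in range(depth)]
        self.wz_out = np.abs(rng.standard_normal(hidden))  # non-negative
        self.wx_out = rng.standard_normal(in_dim)          # unconstrained

    def __call__(self, x):
        z = softplus(self.Wx[0] @ x + self.b[0])
        for Wz, Wx, b in zip(self.Wz, self.Wx[1:], self.b[1:]):
            z = softplus(Wz @ z + Wx @ x + b)
        return float(self.wz_out @ z + self.wx_out @ x)  # scalar, convex in x

# Numerical convexity check: f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y).
f = ICNN(in_dim=8)
rng = np.random.default_rng(1)
x, y = rng.standard_normal(8), rng.standard_normal(8)
for t in np.linspace(0.0, 1.0, 11):
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
```

Because such a surrogate is convex in its input, gradient steps on the latent code move toward a global optimum of the surrogate, which is the property CR-LSO exploits when optimizing architecture representations.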

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.98 | CR-LSO
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val) | 46.51 | CR-LSO
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.35 | CR-LSO
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 91.54 | CR-LSO
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 73.47 | CR-LSO
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 73.44 | CR-LSO
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.98 | CR-LSO
AutoML | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val) | 46.51 | CR-LSO
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 94.35 | CR-LSO
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 91.54 | CR-LSO
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 73.47 | CR-LSO
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 73.44 | CR-LSO

Related Papers

- DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
- AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
- From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
- One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
- DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
- MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
- Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
- Directed Acyclic Graph Convolutional Networks (2025-06-13)