Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension

Xin He, Jiangchao Yao, Yuxin Wang, Zhenheng Tang, Ka Chu Cheung, Simon See, Bo Han, Xiaowen Chu

2022-11-23 · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

One-shot neural architecture search (NAS) substantially improves the search efficiency by training one supernet to estimate the performance of every possible child architecture (i.e., subnet). However, the inconsistency of characteristics among subnets incurs serious interference in the optimization, resulting in poor performance ranking correlation of subnets. Subsequent explorations decompose supernet weights via a particular criterion, e.g., gradient matching, to reduce the interference; yet they suffer from huge computational cost and low space separability. In this work, we propose a lightweight and effective local intrinsic dimension (LID)-based method NAS-LID. NAS-LID evaluates the geometrical properties of architectures by calculating the low-cost LID features layer-by-layer, and the similarity characterized by LID enjoys better separability compared with gradients, which thus effectively reduces the interference among subnets. Extensive experiments on NASBench-201 indicate that NAS-LID achieves superior performance with better efficiency. Specifically, compared to the gradient-driven method, NAS-LID can save up to 86% of GPU memory overhead when searching on NASBench-201. We also demonstrate the effectiveness of NAS-LID on ProxylessNAS and OFA spaces. Source code: https://github.com/marsggbo/NAS-LID.
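The abstract describes two ingredients: a low-cost, layer-by-layer estimate of local intrinsic dimension (LID) from intermediate activations, and an architecture-similarity measure built from those per-layer LID values. The sketch below illustrates the idea using the classic maximum-likelihood LID estimator (Levina-Bickel style); function names and the cosine-similarity choice are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """MLE estimate of local intrinsic dimension at query point x.

    x:          (d,) query point (e.g. one sample's layer activation)
    reference:  (n, d) reference sample of activations
    k:          number of nearest neighbours used in the estimate
    """
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    dists = dists[dists > 0][:k]            # drop any zero self-distance
    r_k = dists[-1]                         # distance to the k-th neighbour
    # LID = -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    return -1.0 / np.mean(np.log(dists / r_k + 1e-12))

def layerwise_lid(activations, k=20):
    """Mean LID per layer; `activations` is a list of (n, d) arrays,
    one array per layer (illustrative helper, not from the paper)."""
    profile = []
    for a in activations:
        lids = [lid_mle(a[i], np.delete(a, i, axis=0), k=k)
                for i in range(len(a))]
        profile.append(float(np.mean(lids)))
    return np.array(profile)

def lid_similarity(p, q):
    """Cosine similarity between two layer-wise LID profiles,
    a plausible stand-in for the paper's LID-based similarity."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))
```

Computing one scalar per layer makes the per-architecture feature a short vector, which is what keeps the similarity computation cheap relative to gradient matching.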

Results

Task | Dataset | Metric | Value | Model
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 92.9 | NAS-LID+RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 89.74 | NAS-LID+RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 69.39 | NAS-LID+RSPS
Neural Architecture Search | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 69.38 | NAS-LID+RSPS
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Test) | 92.9 | NAS-LID+RSPS
AutoML | NAS-Bench-201, CIFAR-10 | Accuracy (Val) | 89.74 | NAS-LID+RSPS
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Test) | 69.39 | NAS-LID+RSPS
AutoML | NAS-Bench-201, CIFAR-100 | Accuracy (Val) | 69.38 | NAS-LID+RSPS

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)