
Cyclic Differentiable Architecture Search

Hongyuan Yu, Houwen Peng, Yan Huang, Jianlong Fu, Hao Du, Liang Wang, Haibin Ling

2020-06-18 · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

Differentiable ARchiTecture Search, i.e., DARTS, has drawn great attention in neural architecture search. It tries to find the optimal architecture in a shallow search network and then measures its performance in a deep evaluation network. The independent optimization of the search and evaluation networks, however, leaves room for potential improvement by allowing interaction between the two networks. To address this issue, we propose new joint optimization objectives and a novel Cyclic Differentiable ARchiTecture Search framework, dubbed CDARTS. To account for the structural difference between the two networks, CDARTS builds a cyclic feedback mechanism between the search and evaluation networks with introspective distillation. First, the search network generates an initial architecture for evaluation, and the weights of the evaluation network are optimized. Second, the architecture weights in the search network are further optimized by the label supervision in classification, as well as the regularization from the evaluation network through feature distillation. Repeating the above cycle results in joint optimization of the search and evaluation networks and thus enables the architecture to evolve to fit the final evaluation network. Experiments and analysis on CIFAR, ImageNet, and NAS-Bench-201 demonstrate that the proposed approach outperforms state-of-the-art methods. Specifically, in the DARTS search space, we achieve 97.52% top-1 accuracy on CIFAR10 and 76.3% top-1 accuracy on ImageNet. In the chain-structured search space, we achieve 78.2% top-1 accuracy on ImageNet, which is 1.1% higher than EfficientNet-B0. Our code and models are publicly available at https://github.com/microsoft/Cream.
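The abstract describes an alternating two-step cycle: the evaluation network's weights are trained under label supervision on an architecture derived from the search network, and the search network's architecture parameters are then updated with a classification loss plus a distillation term fed back from the evaluation network. The sketch below illustrates what one such cycle could look like in PyTorch. It is a minimal sketch under stated assumptions, not the official microsoft/Cream implementation: the names `cdarts_cycle` and `lambda_distill`, the convention that each network returns `(logits, features)`, and the MSE form of the feature-distillation term are all illustrative choices.

```python
import torch.nn.functional as F

def cdarts_cycle(search_net, eval_net, loader, eval_opt, alpha_opt,
                 lambda_distill=1.0):
    """One hypothetical search/evaluation feedback cycle (illustrative sketch)."""
    for images, labels in loader:
        # Step 1: train the evaluation network's weights with ordinary
        # label supervision on the architecture proposed by the search network.
        eval_opt.zero_grad()
        eval_logits, eval_feats = eval_net(images)  # assumed (logits, features) output
        F.cross_entropy(eval_logits, labels).backward()
        eval_opt.step()

        # Step 2: update the search network's architecture parameters (alphas)
        # with (a) the classification loss and (b) a distillation term that
        # regularizes the search network toward the evaluation network's features.
        alpha_opt.zero_grad()
        search_logits, search_feats = search_net(images)
        cls_loss = F.cross_entropy(search_logits, labels)
        # MSE feature matching is one plausible distillation form; the paper's
        # exact objective may differ, and a projection layer may be needed if
        # the two networks' feature shapes do not match.
        distill_loss = F.mse_loss(search_feats, eval_feats.detach())
        (cls_loss + lambda_distill * distill_loss).backward()
        alpha_opt.step()
```

In the full framework, the evaluation network is periodically rebuilt from the architecture derived from the current alphas, which is what closes the cycle and lets the searched architecture evolve to fit the final evaluation network.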

Results

Task                       | Dataset                        | Metric          | Value | Model
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 45.51 | CDARTS
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 94.02 | CDARTS
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 91.12 | CDARTS
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 71.92 | CDARTS
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 72.12 | CDARTS
AutoML                     | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 45.51 | CDARTS
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 94.02 | CDARTS
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 91.12 | CDARTS
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 71.92 | CDARTS
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 72.12 | CDARTS

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)