
DARTS-: Robustly Stepping out of Performance Collapse Without Indicators

Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, Junchi Yan

Published: 2020-09-02 · ICLR 2021 · Tasks: AutoML, Neural Architecture Search
Links: Paper · PDF · Code (official)

Abstract

Despite the rapid development of differentiable architecture search (DARTS), it suffers from long-standing performance instability, which severely limits its applications. Existing robustifying methods draw clues from the resulting degenerate behavior instead of identifying its root cause. Various indicators, such as Hessian eigenvalues, have been proposed as signals to stop searching before performance collapses. However, these indicator-based methods tend to reject good architectures when their thresholds are set inappropriately, especially since the search process is intrinsically noisy. In this paper, we take a more subtle and direct approach to resolving the collapse. We first demonstrate that skip connections hold a clear advantage over the other candidate operations: they can easily recover from a disadvantageous state and become dominant. We conjecture that this privilege is the cause of the degenerate performance. We therefore propose to factor out this benefit with an auxiliary skip connection, ensuring fairer competition among all operations. We call this approach DARTS-. Extensive experiments on various datasets verify that it substantially improves robustness. Our code is available at https://github.com/Meituan-AutoML/DARTS- .
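The mechanism is compact enough to sketch. Below is a minimal PyTorch sketch of a DARTS-style mixed edge with the auxiliary skip connection described in the abstract: the auxiliary skip path is weighted by a coefficient beta kept outside the softmax over architecture parameters, and beta is decayed toward zero over the search. The candidate-operation setup, class names, and the linear decay schedule here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOpWithAuxSkip(nn.Module):
    """DARTS-style mixed edge plus the auxiliary skip connection of DARTS-.

    Minimal sketch: the candidate ops and naming are illustrative assumptions.
    """

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # candidate operations on this edge
        # Architecture parameters (alpha), one logit per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x, beta):
        # Standard DARTS mixture: softmax-weighted sum over candidate ops.
        weights = F.softmax(self.alpha, dim=0)
        mixed = sum(w * op(x) for w, op in zip(weights, self.ops))
        # Auxiliary skip connection, scaled by beta and kept OUTSIDE the
        # softmax, so the skip path's optimization advantage is absorbed here
        # rather than inflating the architecture weight of the skip candidate.
        return mixed + beta * x


def beta_schedule(epoch, total_epochs):
    """Decay beta from 1 to 0 over the search (a linear schedule is assumed)."""
    return 1.0 - epoch / total_epochs
```

At beta = 1 the auxiliary path acts like a plain residual connection that stabilizes optimization; by the end of the search beta reaches 0, the edge reduces to the standard DARTS mixture, and the final architecture can be derived exactly as in DARTS.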

Results

Identical numbers are reported under both the "Neural Architecture Search" and "AutoML" task tags; they are listed once below.

Dataset                            Metric            Value    Model
NAS-Bench-201 (ImageNet-16-120)    Accuracy (Test)   45.12    DARTS-
NAS-Bench-201 (ImageNet-16-120)    Accuracy (Val)    44.87    DARTS-
NAS-Bench-201 (CIFAR-10)           Accuracy (Test)   93.8     DARTS-
NAS-Bench-201 (CIFAR-10)           Accuracy (Val)    91.03    DARTS-
NAS-Bench-201 (CIFAR-10)           Search time (s)   11520    DARTS-
NAS-Bench-201 (CIFAR-100)          Accuracy (Test)   71.53    DARTS-
NAS-Bench-201 (CIFAR-100)          Accuracy (Val)    71.36    DARTS-

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
Optimising 4th-Order Runge-Kutta Methods: A Dynamic Heuristic Approach for Efficiency and Low Storage (2025-06-26)
Multimodal Representation Learning and Fusion (2025-06-25)
Overtuning in Hyperparameter Optimization (2025-06-24)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)