Papers With Code 2



Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Generalized Global Ranking-Aware Neural Architecture Ranker for Efficient Image Classifier Search

Bicheng Guo, Tao Chen, Shibo He, Haoyu Liu, Lilin Xu, Peng Ye, Jiming Chen

2022-01-30 · Reinforcement Learning · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

Neural Architecture Search (NAS) is a powerful tool for automating the design of effective image-processing DNNs. Ranking has been advocated as a way to build efficient performance predictors for NAS. Previous contrastive methods solve the ranking problem by comparing pairs of architectures and predicting their relative performance. However, they focus only on the ranking between the two architectures involved and neglect the overall quality distribution of the search space, which can cause generalization issues. To address the problems caused by this local perspective, a predictor called the Neural Architecture Ranker (NAR) is proposed, which concentrates on the global quality tier of a specific architecture. The NAR explores the quality tiers of the search space globally and classifies each architecture into the tier it belongs to according to its global ranking. The predictor thereby gains knowledge of the performance distribution of the search space, which helps it generalize its ranking ability to new datasets more easily. Moreover, the global quality distribution simplifies the search phase: candidates are sampled directly according to the statistics of the quality tiers, with no need to train a search algorithm such as Reinforcement Learning (RL) or an Evolutionary Algorithm (EA), which streamlines the NAS pipeline and saves computational overhead. The proposed NAR outperforms state-of-the-art methods on two datasets widely used in NAS research. On the vast search space of NAS-Bench-101, the NAR finds an architecture in the top 0.01‰ of performance by sampling alone. It also generalizes well to the different image datasets of NAS-Bench-201, i.e., CIFAR-10, CIFAR-100, and ImageNet-16-120, identifying the optimal architecture for each.
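The tier-based search described in the abstract — classify each architecture into a global quality tier, then sample candidates from the best tiers instead of training an RL or EA search agent — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the tier predictor, tier count, and integer architecture encodings below are hypothetical stand-ins.

```python
import random

def rank_into_tiers(architectures, predict_tier, num_tiers=5):
    """Group architectures by the quality tier the predictor assigns to each.

    Tier 0 is assumed to be the highest-quality tier.
    """
    tiers = {t: [] for t in range(num_tiers)}
    for arch in architectures:
        tiers[predict_tier(arch)].append(arch)
    return tiers

def sample_candidates(tiers, k=10):
    """Draw k candidates, filling from the best (lowest-index) tiers first."""
    candidates = []
    for t in sorted(tiers):
        take = min(k - len(candidates), len(tiers[t]))
        candidates.extend(random.sample(tiers[t], take))
        if len(candidates) == k:
            break
    return candidates

# Toy usage: architectures are plain integers, and a stand-in predictor
# (in place of the trained NAR) assigns tiers by id modulo 5.
archs = list(range(100))
tiers = rank_into_tiers(archs, predict_tier=lambda a: a % 5)
top = sample_candidates(tiers, k=10)
```

The point of the sketch is the control flow: no search policy is trained, and the only learned component is the tier predictor itself, which is what lets the method skip the RL/EA stage of a conventional NAS pipeline.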

Results

Task                       | Dataset                        | Metric          | Value | Model
---------------------------|--------------------------------|-----------------|-------|------
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.66 | NAR
Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val)  | 46.16 | NAR
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 94.33 | NAR
Neural Architecture Search | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 91.44 | NAR
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 72.89 | NAR
Neural Architecture Search | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 72.54 | NAR
AutoML                     | NAS-Bench-201, ImageNet-16-120 | Accuracy (Test) | 46.66 | NAR
AutoML                     | NAS-Bench-201, ImageNet-16-120 | Accuracy (Val)  | 46.16 | NAR
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Test) | 94.33 | NAR
AutoML                     | NAS-Bench-201, CIFAR-10        | Accuracy (Val)  | 91.44 | NAR
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Test) | 72.89 | NAR
AutoML                     | NAS-Bench-201, CIFAR-100       | Accuracy (Val)  | 72.54 | NAR

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)