
FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

Alvin Wan, Xiaoliang Dai, Peizhao Zhang, Zijian He, Yuandong Tian, Saining Xie, Bichen Wu, Matthew Yu, Tao Xu, Kan Chen, Peter Vajda, Joseph E. Gonzalez

2020-04-12 · CVPR 2020 · Neural Architecture Search
Paper · PDF · Code (official)

Abstract

Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small when compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory- and computation-efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421$\times$ less search cost, DMaskingNAS finds models with 0.9% higher accuracy and 15% fewer FLOPs than MobileNetV3-Small, and with similar accuracy but 20% fewer FLOPs than EfficientNet-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.
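The channel-masking idea in the abstract can be illustrated with a short sketch: rather than instantiating one convolution per candidate channel count, a single max-width convolution is shared, and the search weights a set of binary channel masks (one per candidate width) via Gumbel-softmax. This is a minimal, hypothetical PyTorch sketch of that mechanism, not the authors' implementation; the class name `ChannelMasking` and its interface are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelMasking(nn.Module):
    """Sketch of DMaskingNAS-style channel search (assumed interface).

    A shared full-width conv produces `max_channels` feature maps; this
    module applies a Gumbel-softmax-weighted sum of binary channel masks,
    so memory cost stays that of a single conv while the search explores
    every candidate width.
    """

    def __init__(self, max_channels, candidate_widths):
        super().__init__()
        # One binary mask per candidate width: keep the first c channels.
        masks = torch.zeros(len(candidate_widths), max_channels)
        for i, c in enumerate(candidate_widths):
            masks[i, :c] = 1.0
        self.register_buffer("masks", masks)
        # Architecture logits, one per candidate width.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_widths)))

    def forward(self, x, tau=1.0):
        # x: (N, max_channels, H, W) output of the shared full-width conv.
        weights = F.gumbel_softmax(self.alpha, tau=tau)      # (K,), sums to 1
        mask = (weights[:, None] * self.masks).sum(dim=0)    # (max_channels,)
        return x * mask.view(1, -1, 1, 1)
```

Because every candidate mask keeps the narrowest shared prefix of channels, those channels pass through unscaled, while wider channels are attenuated according to how much probability mass the search currently assigns to the wider candidates.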

Results

Task | Dataset | Metric | Value | Model
Neural Architecture Search | ImageNet | Accuracy | 77.2 | FBNetV2-L1
Neural Architecture Search | ImageNet | Top-1 Error Rate | 22.8 | FBNetV2-L1
Neural Architecture Search | ImageNet | Accuracy | 76.0 | FBNetV2-F4
Neural Architecture Search | ImageNet | Top-1 Error Rate | 24.0 | FBNetV2-F4
Neural Architecture Search | ImageNet | Accuracy | 73.2 | FBNetV2-F3
Neural Architecture Search | ImageNet | Top-1 Error Rate | 26.8 | FBNetV2-F3
Neural Architecture Search | ImageNet | Accuracy | 68.3 | FBNetV2-F1
Neural Architecture Search | ImageNet | Top-1 Error Rate | 31.7 | FBNetV2-F1
AutoML | ImageNet | Accuracy | 77.2 | FBNetV2-L1
AutoML | ImageNet | Top-1 Error Rate | 22.8 | FBNetV2-L1
AutoML | ImageNet | Accuracy | 76.0 | FBNetV2-F4
AutoML | ImageNet | Top-1 Error Rate | 24.0 | FBNetV2-F4
AutoML | ImageNet | Accuracy | 73.2 | FBNetV2-F3
AutoML | ImageNet | Top-1 Error Rate | 26.8 | FBNetV2-F3
AutoML | ImageNet | Accuracy | 68.3 | FBNetV2-F1
AutoML | ImageNet | Top-1 Error Rate | 31.7 | FBNetV2-F1

Related Papers

DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
AnalogNAS-Bench: A NAS Benchmark for Analog In-Memory Computing (2025-06-23)
From Tiny Machine Learning to Tiny Deep Learning: A Survey (2025-06-21)
One-Shot Neural Architecture Search with Network Similarity Directed Initialization for Pathological Image Classification (2025-06-17)
DDS-NAS: Dynamic Data Selection within Neural Architecture Search via On-line Hard Example Mining applied to Image Classification (2025-06-17)
MARCO: Hardware-Aware Neural Architecture Search for Edge Devices with Multi-Agent Reinforcement Learning and Conformal Prediction Filtering (2025-06-16)
Finding Optimal Kernel Size and Dimension in Convolutional Neural Networks: An Architecture Optimization Approach (2025-06-16)
Directed Acyclic Graph Convolutional Networks (2025-06-13)