Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining

Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, Joseph E. Gonzalez

Published 2020-06-03 · CVPR 2021
Tasks: Neural Architecture Search, Object Detection
Paper · PDF · Code

Abstract

Neural Architecture Search (NAS) yields state-of-the-art neural networks that outperform their best manually-designed counterparts. However, previous NAS methods search for architectures under one set of training hyper-parameters (i.e., a training recipe), overlooking superior architecture-recipe combinations. To address this, we present Neural Architecture-Recipe Search (NARS) to search both (a) architectures and (b) their corresponding training recipes, simultaneously. NARS utilizes an accuracy predictor that scores architectures and training recipes jointly, guiding both sample selection and ranking. Furthermore, to compensate for the enlarged search space, we leverage "free" architecture statistics (e.g., FLOP count) to pretrain the predictor, significantly improving its sample efficiency and prediction reliability. After training the predictor via constrained iterative optimization, we run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs, called FBNetV3, for a variety of resource constraints. FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually-designed competitors. For example, FBNetV3 matches both EfficientNet and ResNeSt accuracy on ImageNet with up to 2.0x and 7.1x fewer FLOPs, respectively. Furthermore, FBNetV3 yields significant performance gains for downstream object detection tasks, improving mAP despite 18% fewer FLOPs and 34% fewer parameters than EfficientNet-based equivalents.
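The abstract's search procedure — a predictor that scores architecture-recipe pairs jointly, a cheap "free" statistic like FLOP count, and a constrained evolutionary search — can be sketched with toy stand-ins. Everything below (the search spaces, `flops_proxy`, and the `predictor` scoring function) is a hypothetical placeholder, not the paper's actual predictor or search space:

```python
import random

random.seed(0)  # for reproducibility of this toy run

# Hypothetical search spaces: architecture knobs and training-recipe knobs
ARCH_SPACE = {"depth": [2, 3, 4], "width": [16, 32, 64], "expansion": [3, 4, 6]}
RECIPE_SPACE = {"lr": [0.1, 0.2, 0.4], "mixup": [0.0, 0.1, 0.2], "epochs": [200, 300, 400]}

def sample(space):
    return {k: random.choice(v) for k, v in space.items()}

def flops_proxy(arch):
    # Stand-in for a "free" architecture statistic (e.g. FLOP count)
    # that the paper uses to pretrain the predictor; purely synthetic here.
    return arch["depth"] * arch["width"] * arch["expansion"]

def predictor(arch, recipe):
    # Stand-in for the trained accuracy predictor: scores an
    # architecture AND its training recipe jointly. Entirely synthetic.
    return flops_proxy(arch) * 0.001 + recipe["epochs"] * 0.01 - recipe["lr"]

def mutate(pair):
    # Mutate either one architecture knob or one recipe knob
    arch, recipe = dict(pair[0]), dict(pair[1])
    if random.random() < 0.5:
        k = random.choice(list(ARCH_SPACE))
        arch[k] = random.choice(ARCH_SPACE[k])
    else:
        k = random.choice(list(RECIPE_SPACE))
        recipe[k] = random.choice(RECIPE_SPACE[k])
    return arch, recipe

def evolutionary_search(pop_size=32, generations=20, flop_budget=600):
    population = [(sample(ARCH_SPACE), sample(RECIPE_SPACE)) for _ in range(pop_size)]
    for _ in range(generations):
        # Enforce the resource constraint, then rank survivors by predicted score
        feasible = [p for p in population if flops_proxy(p[0]) <= flop_budget]
        feasible.sort(key=lambda p: predictor(*p), reverse=True)
        parents = feasible[: max(2, pop_size // 4)] or population[:2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    # Return the best pair that respects the budget
    feasible = [p for p in population if flops_proxy(p[0]) <= flop_budget]
    return max(feasible, key=lambda p: predictor(*p))

best_arch, best_recipe = evolutionary_search()
```

Running the search under a different `flop_budget` yields a different architecture-recipe pair, which mirrors how the paper derives the FBNetV3 family for a variety of resource constraints from one trained predictor.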

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Neural Architecture Search | ImageNet | Accuracy | 82.3 | FBNetV3-G |
| Neural Architecture Search | ImageNet | Top-1 Error Rate | 17.7 | FBNetV3-G |
| Neural Architecture Search | ImageNet | Accuracy | 80.4 | FBNetV3-E |
| Neural Architecture Search | ImageNet | Top-1 Error Rate | 19.6 | FBNetV3-E |
| Neural Architecture Search | ImageNet | Accuracy | 79.6 | FBNetV3-C |
| Neural Architecture Search | ImageNet | Top-1 Error Rate | 20.4 | FBNetV3-C |
| Neural Architecture Search | ImageNet | Accuracy | 78 | FBNetV3-A |
| Neural Architecture Search | ImageNet | Top-1 Error Rate | 22 | FBNetV3-A |
| AutoML | ImageNet | Accuracy | 82.3 | FBNetV3-G |
| AutoML | ImageNet | Top-1 Error Rate | 17.7 | FBNetV3-G |
| AutoML | ImageNet | Accuracy | 80.4 | FBNetV3-E |
| AutoML | ImageNet | Top-1 Error Rate | 19.6 | FBNetV3-E |
| AutoML | ImageNet | Accuracy | 79.6 | FBNetV3-C |
| AutoML | ImageNet | Top-1 Error Rate | 20.4 | FBNetV3-C |
| AutoML | ImageNet | Accuracy | 78 | FBNetV3-A |
| AutoML | ImageNet | Top-1 Error Rate | 22 | FBNetV3-A |

Related Papers

- DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
- Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
- Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)