Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ISyNet: Convolutional Neural Networks design for AI accelerator

Alexey Letunovskiy, Vladimir Korviakov, Vladimir Polovnikov, Anastasiia Kargapoltseva, Ivan Mazurenko, Yepan Xiong

2021-09-04 · Image Classification · Speech Recognition · Neural Architecture Search · Object Detection
Paper · PDF · Code (official and community implementations)

Abstract

In recent years, Deep Learning has achieved significant results in many practical problems, such as computer vision, natural language processing, speech recognition, and many others. For many years the main goal of research was to improve the quality of models, even if their complexity was impractically high. However, for production solutions, which often require real-time operation, the latency of the model plays a very important role. Current state-of-the-art architectures are found with neural architecture search (NAS) that takes model complexity into account. However, designing a search space suitable for specific hardware remains a challenging task. To address this problem we propose a measure of the hardware efficiency of a neural architecture search space, the matrix efficiency measure (MEM); a search space comprising hardware-efficient operations; a latency-aware scaling method; and ISyNet, a set of architectures designed to be both fast on specialized neural processing unit (NPU) hardware and accurate. We show the advantage of the designed architectures for NPU devices on ImageNet and their generalization ability on downstream classification and detection tasks.
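The abstract mentions a latency-aware approach to architecture search but does not spell out the objective. As a minimal sketch of how latency is commonly folded into a NAS score: the multiplicative-penalty form, the weight `alpha`, and the target latency below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a hardware-aware NAS objective: reward accuracy,
# penalize candidates whose measured latency exceeds a target.
# `alpha` and `target_ms` are assumed values for illustration only.

def latency_aware_score(accuracy: float, latency_ms: float,
                        target_ms: float = 5.0, alpha: float = 0.07) -> float:
    """Multiplicative latency penalty: (latency / target) ** -alpha."""
    return accuracy * (latency_ms / target_ms) ** (-alpha)

# Rank candidates given (name, top-1 accuracy, NPU latency in ms) triples.
candidates = [
    ("A", 0.78, 4.0),   # fast, slightly less accurate
    ("B", 0.80, 8.0),   # more accurate but over the latency target
]
best = max(candidates, key=lambda c: latency_aware_score(c[1], c[2]))
```

With these assumed weights the faster candidate "A" wins despite its lower raw accuracy, which is exactly the trade-off a latency-aware search is meant to expose.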

Results

Task                       | Dataset  | Metric           | Value | Model
Neural Architecture Search | ImageNet | Top-1 Error Rate | 19.84 | ISyNet-N3
Neural Architecture Search | ImageNet | Top-1 Error Rate | 20.59 | ISyNet-N2
Neural Architecture Search | ImageNet | Top-1 Error Rate | 21.45 | ISyNet-N1-S3
Neural Architecture Search | ImageNet | Top-1 Error Rate | 22.15 | ISyNet-N1-S2
Neural Architecture Search | ImageNet | Top-1 Error Rate | 22.7  | ISyNet-N1-S1
Neural Architecture Search | ImageNet | Top-1 Error Rate | 23.16 | ISyNet-N1
Neural Architecture Search | ImageNet | Top-1 Error Rate | 24.55 | ISyNet-N0
AutoML                     | ImageNet | Top-1 Error Rate | 19.84 | ISyNet-N3
AutoML                     | ImageNet | Top-1 Error Rate | 20.59 | ISyNet-N2
AutoML                     | ImageNet | Top-1 Error Rate | 21.45 | ISyNet-N1-S3
AutoML                     | ImageNet | Top-1 Error Rate | 22.15 | ISyNet-N1-S2
AutoML                     | ImageNet | Top-1 Error Rate | 22.7  | ISyNet-N1-S1
AutoML                     | ImageNet | Top-1 Error Rate | 23.16 | ISyNet-N1
AutoML                     | ImageNet | Top-1 Error Rate | 24.55 | ISyNet-N0
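The table reports Top-1 Error Rate on ImageNet; the corresponding Top-1 Accuracy is simply 100 minus the error. A quick sketch of that conversion, with the values copied from the table above:

```python
# Top-1 error rates (%) on ImageNet, copied from the results table.
error_rates = {
    "ISyNet-N3": 19.84, "ISyNet-N2": 20.59, "ISyNet-N1-S3": 21.45,
    "ISyNet-N1-S2": 22.15, "ISyNet-N1-S1": 22.7, "ISyNet-N1": 23.16,
    "ISyNet-N0": 24.55,
}

# Top-1 accuracy (%) = 100 - Top-1 error (%).
top1_acc = {model: round(100.0 - err, 2) for model, err in error_rates.items()}
```

So the largest model, ISyNet-N3, reaches 80.16% Top-1 accuracy, and the smallest, ISyNet-N0, reaches 75.45%.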

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
- NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
- DASViT: Differentiable Architecture Search for Vision Transformer (2025-07-17)