

Learned Step Size Quantization

Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha

2019-02-21 · ICLR 2020 · Quantization · Model Compression
Paper · PDF · Code

Abstract

Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
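
The quantizer this describes is simple: values are divided by a learned step size s, clipped to the b-bit integer range, rounded with a straight-through estimator, and rescaled by s; the gradient reaching s is scaled by g = 1/sqrt(N · Q_P), where N is the number of elements in the layer and Q_P is the positive clip level. Below is a minimal PyTorch sketch of that scheme, reconstructed from the paper's equations rather than taken from the authors' released code; the names LSQQuantizer, grad_scale, and round_pass are illustrative.

```python
import torch
import torch.nn as nn


def grad_scale(x, scale):
    # Forward: identity. Backward: incoming gradient is multiplied by `scale`.
    return (x - x * scale).detach() + x * scale


def round_pass(x):
    # Round to the nearest integer with a straight-through estimator:
    # forward rounds, backward passes the gradient through unchanged.
    return (x.round() - x).detach() + x


class LSQQuantizer(nn.Module):
    """Illustrative LSQ fake-quantizer with a learnable step size."""

    def __init__(self, bits=3, signed=True):
        super().__init__()
        if signed:  # weights: symmetric signed range
            self.qn, self.qp = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        else:       # activations: unsigned range
            self.qn, self.qp = 0, 2 ** bits - 1
        self.step = nn.Parameter(torch.tensor(1.0))
        self.register_buffer("initialized", torch.tensor(False))

    def forward(self, v):
        if not self.initialized:
            # Paper's initialization: s = 2 * mean(|v|) / sqrt(Qp)
            with torch.no_grad():
                self.step.copy_(2 * v.abs().mean() / self.qp ** 0.5)
            self.initialized.fill_(True)
        # Step-size gradient scale g = 1 / sqrt(N * Qp) from the paper
        g = 1.0 / (v.numel() * self.qp) ** 0.5
        s = grad_scale(self.step, g)
        # Quantize: clip to [Qn, Qp], round (STE), then rescale by s
        return round_pass(torch.clamp(v / s, self.qn, self.qp)) * s
```

In use, quantizer(weight) returns a fake-quantized tensor that is differentiable with respect to both the input and the step size, so s trains alongside the other network parameters with ordinary SGD. With signed=True the range is [-2^(b-1), 2^(b-1)-1] for weights; with signed=False it is [0, 2^b - 1] for activations, matching the W3A4 / W4A4 configurations in the results below.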

Results

Task               Dataset   Metric               Value    Model
Model Compression  ImageNet  Top-1                77.878   ADLIK-MO-ResNet50+W4A4
Model Compression  ImageNet  Top-1                77.34    ADLIK-MO-ResNet50+W3A4
Quantization       ImageNet  Activation bits      4        ADLIK-MO-ResNet50-W4A4
Quantization       ImageNet  Top-1 Accuracy (%)   77.878   ADLIK-MO-ResNet50-W4A4
Quantization       ImageNet  Weight bits          4        ADLIK-MO-ResNet50-W4A4
Quantization       ImageNet  Activation bits      4        ADLIK-MO-ResNet50-W3A4
Quantization       ImageNet  Top-1 Accuracy (%)   77.34    ADLIK-MO-ResNet50-W3A4
Quantization       ImageNet  Weight bits          3        ADLIK-MO-ResNet50-W3A4
Quantization       ImageNet  Activation bits      4        ResNet50-W4A4 (paper)
Quantization       ImageNet  Top-1 Accuracy (%)   76.7     ResNet50-W4A4 (paper)
Quantization       ImageNet  Weight bits          4        ResNet50-W4A4 (paper)

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression (2025-07-21)
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications (2025-07-15)
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-14)
Lightweight Federated Learning over Wireless Edge Networks (2025-07-13)