
Quantization

20 benchmarks · 4925 papers

Quantization is a promising technique for reducing the computation cost of neural network training: high-cost floating-point numbers (e.g., float32) are replaced with low-cost fixed-point numbers (e.g., int8/int16).

Source: Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
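
As an illustration of the idea (not taken from the paper above; the helper names below are hypothetical), a minimal sketch of symmetric uniform quantization of a float32 array to int8 and back might look like this:

    import numpy as np

    def quantize_int8(x):
        """Symmetric uniform quantization: map float32 values to int8 codes."""
        scale = max(np.abs(x).max() / 127.0, 1e-12)    # largest magnitude -> 127
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover an approximate float32 array from the int8 codes."""
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)    # e.g. a small weight matrix
    q, s = quantize_int8(w)
    print(np.abs(w - dequantize(q, s)).max())       # worst-case quantization error

The same scheme extends to int16 by replacing 127 with 32767 as the largest code.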

Benchmarks

Quantization on ImageNet

Top-1 Accuracy (%) · Weight bits · Activation bits

Quantization on CIFAR-100

CIFAR-100 W5A5 Top-1 Accuracy · CIFAR-100 W4A4 Top-1 Accuracy · CIFAR-100 W6A6 Top-1 Accuracy · CIFAR-100 W8A8 Top-1 Accuracy

Quantization on CIFAR-10

mAP

Quantization on CIFAR10

CIFAR-10 W4A4 Top-1 Accuracy · CIFAR-10 W5A5 Top-1 Accuracy · CIFAR-10 W8A8 Top-1 Accuracy · CIFAR-10 W6A6 Top-1 Accuracy
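
In the benchmark names above, the WxAy labels give the weight and activation bit-widths: W4A4 means 4-bit weights and 4-bit activations, W8A8 means 8-bit weights and activations, and so on. A minimal, illustrative sketch of simulating such a setting (assuming symmetric uniform quantization with per-tensor scales, not any particular benchmarked method) might look like this:

    import numpy as np

    def fake_quantize(x, bits):
        """Simulate uniform symmetric quantization at a given bit-width.

        Values are rounded to 2**(bits-1) - 1 signed levels and immediately
        dequantized, so the array stays float32 but carries the rounding
        error of the low-bit representation."""
        qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit
        scale = max(np.abs(x).max() / qmax, 1e-12)    # avoid division by zero
        return (np.clip(np.round(x / scale), -qmax, qmax) * scale).astype(np.float32)

    # W4A4: both the weights and the activations use 4 bits
    w = np.random.randn(64, 64).astype(np.float32)
    a = np.random.randn(64, 64).astype(np.float32)
    y = fake_quantize(a, bits=4) @ fake_quantize(w, bits=4)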

Quantization on AgeDB-30

Accuracy

Quantization on CFP-FP

Accuracy

Quantization on COCO (Common Objects in Context)

mAP

Quantization on IJB-B

TAR @ FAR=1e-4

Quantization on IJB-C

TAR @ FAR=1e-4

Quantization on Knowledge-based:

All

Quantization on LFW

Accuracy

Quantization on Wiki-40B

Perplexity