Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer

Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou

2021-11-27 · Quantization
Paper · PDF · Code (official)

Abstract

Network quantization significantly reduces model inference complexity and has been widely used in real-world deployments. However, most existing quantization methods have been developed mainly on Convolutional Neural Networks (CNNs), and suffer severe degradation when applied to fully quantized vision transformers. In this work, we demonstrate that many of these difficulties arise because of serious inter-channel variation in LayerNorm inputs, and present Power-of-Two Factor (PTF), a systematic method to reduce the performance degradation and inference complexity of fully quantized vision transformers. In addition, observing an extremely non-uniform distribution in attention maps, we propose Log-Int-Softmax (LIS) to sustain that distribution and simplify inference by using 4-bit quantization and the BitShift operator. Comprehensive experiments on various transformer-based architectures and benchmarks show that our Fully Quantized Vision Transformer (FQ-ViT) outperforms previous works while even using a lower bit-width on attention maps. For instance, we reach 84.89% top-1 accuracy with ViT-L on ImageNet and 50.8 mAP with Cascade Mask R-CNN (Swin-S) on COCO. To our knowledge, we are the first to achieve near-lossless accuracy (~1% degradation) on fully quantized vision transformers. The code is available at https://github.com/megvii-research/FQ-ViT.
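The two ideas named in the abstract can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation: the function names, the brute-force search over power-of-two factors, and the simple max-based base scale are assumptions made here for clarity. PTF quantizes each LayerNorm-input channel with a shared base scale s multiplied by a per-channel power-of-two factor 2^alpha_c, so the per-channel rescaling can be done with integer bit-shifts at inference; LIS quantizes softmax outputs on a log2 grid, so multiplying by an attention weight becomes a right shift.

```python
import numpy as np

def ptf_quantize(x, bits=8, K=3):
    """Power-of-Two Factor (PTF) sketch: per-channel scale = s * 2^alpha_c,
    alpha_c in {0..K} chosen by brute force to minimize quantization MSE."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for signed 8-bit
    s = np.abs(x).max() / (qmax * 2 ** K)      # base scale; s * 2^K covers the widest channel
    n_ch = x.shape[-1]
    alphas = np.zeros(n_ch, dtype=np.int64)
    x_q = np.zeros(x.shape, dtype=np.int64)
    for c in range(n_ch):
        best_err = np.inf
        for a in range(K + 1):
            scale = s * 2 ** a
            q = np.clip(np.round(x[..., c] / scale), -qmax - 1, qmax)
            err = np.mean((q * scale - x[..., c]) ** 2)
            if err < best_err:
                best_err, alphas[c], x_q[..., c] = err, a, q
    return x_q, alphas, s

def ptf_dequantize(x_q, alphas, s):
    # In an integer pipeline, multiplying by 2^alpha_c is a left shift.
    return x_q * (s * 2.0 ** alphas)

def lis_quantize(p, bits=4):
    """Log-Int-Softmax (LIS) sketch: encode a softmax output p as q = round(-log2 p),
    so p is approximated by 2^(-q) and applying it reduces to a right shift by q."""
    qmax = 2 ** bits - 1                       # 15 for 4-bit
    p = np.maximum(p, 2.0 ** (-qmax))          # saturate tiny probabilities, avoid log(0)
    return np.clip(np.round(-np.log2(p)), 0, qmax).astype(np.int64)
```

For example, the softmax outputs [0.5, 0.25, 0.125] quantize exactly to the 4-bit codes [1, 2, 3], and dequantizing with 2^(-q) recovers them; for PTF, channels with small dynamic range receive a small alpha_c, so they are not crushed by the scale of the widest channel.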

Results

Task: Quantization · Dataset: ImageNet

Model             Weight bits  Activation bits  Top-1 Accuracy (%)
FQ-ViT (ViT-L)    8            8                85.03
FQ-ViT (ViT-B)    8            8                83.31
FQ-ViT (Swin-B)   8            8                82.97
FQ-ViT (Swin-S)   8            8                82.71
FQ-ViT (DeiT-B)   8            8                81.2
FQ-ViT (Swin-T)   8            8                80.51
FQ-ViT (DeiT-S)   8            8                79.17
FQ-ViT (DeiT-T)   8            8                71.61

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
Quantized Rank Reduction: A Communications-Efficient Federated Learning Scheme for Network-Critical Applications (2025-07-15)
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization (2025-07-14)
Lightweight Federated Learning over Wireless Edge Networks (2025-07-13)
Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation (2025-07-11)