
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer

2022-08-15 · Sentiment Analysis · Quantization · Natural Language Inference · Semantic Textual Similarity · Linguistic Acceptability · Language Modelling

Abstract

Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cuts the memory needed for inference in half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication, to quantize most of the features. For the emergent outliers, however, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while more than 99.9% of values are still multiplied in 8-bit. Using LLM.int8(), we show empirically that it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software.
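The two-part procedure described in the abstract can be sketched in NumPy. This is an illustrative helper, not the bitsandbytes API: `llm_int8_matmul` is a hypothetical name, fp16 is approximated with full-precision floats, and the outlier threshold of 6.0 follows the magnitude criterion used in the paper.

```python
import numpy as np

def llm_int8_matmul(X, W, threshold=6.0):
    """Sketch of LLM.int8(): vector-wise Int8 quantization plus a
    mixed-precision decomposition for outlier feature dimensions.
    X: (n, k) activations, W: (k, m) weights."""
    # 1. Mixed-precision decomposition: split off the hidden dimensions
    #    where any activation magnitude reaches the outlier threshold.
    outlier = np.any(np.abs(X) >= threshold, axis=0)
    X_out, W_out = X[:, outlier], W[outlier, :]
    X_reg, W_reg = X[:, ~outlier], W[~outlier, :]

    # 2. Vector-wise quantization: a separate normalization constant for
    #    each row of X and each column of W (one per inner product pair).
    cx = np.maximum(np.max(np.abs(X_reg), axis=1, keepdims=True), 1e-8) / 127.0
    cw = np.maximum(np.max(np.abs(W_reg), axis=0, keepdims=True), 1e-8) / 127.0
    Xq = np.round(X_reg / cx).astype(np.int8)
    Wq = np.round(W_reg / cw).astype(np.int8)

    # 3. Int8 matmul (accumulated in int32), then dequantize with the
    #    outer product of the row and column constants.
    int8_part = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (cx * cw)

    # 4. The few outlier dimensions stay in higher precision
    #    (16-bit in the paper) and are added back.
    return int8_part + X_out @ W_out
```

The halved memory footprint follows directly: a 175B-parameter model needs about 350 GB of weights in 16-bit but about 175 GB in Int8, which is what brings OPT-175B/BLOOM within reach of a single server with consumer GPUs.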

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Natural Language Inference | MultiNLI | Matched Accuracy | 90.2 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) |
| Language Modelling | C4 | Perplexity | 12.45 | Zeropoint LLM.int8 13B (vector-wise + decomp) |
| Language Modelling | C4 | Perplexity | 13.3 | LLM.float32 6.7B |
| Language Modelling | C4 | Perplexity | 14.43 | LLM.float32 2.7B |
| Language Modelling | C4 | Perplexity | 15.91 | LLM.float32 1.3B |
| Semantic Textual Similarity | STS Benchmark | Pearson Correlation | 0.919 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) |
| Sentiment Analysis | SST-2 Binary classification | Accuracy | 96.4 | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) |
