Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Quantisation and Pruning for Neural Network Compression and Regularisation

Kimessha Paupamah, Steven James, Richard Klein

2020-01-14 · Network Pruning · Neural Network Compression
Paper · PDF · Code

Abstract

Deep neural networks are typically too computationally expensive to run in real-time on consumer-grade hardware and low-powered devices. In this paper, we investigate reducing the computational and memory requirements of neural networks through network pruning and quantisation. We examine their efficacy on large networks like AlexNet compared to recent compact architectures: ShuffleNet and MobileNet. Our results show that pruning and quantisation compress these networks to less than half their original size and improve their efficiency, particularly on MobileNet with a 7x speedup. We also demonstrate that pruning, in addition to reducing the number of parameters in a network, can aid in the correction of overfitting.
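The two techniques the abstract combines can be illustrated in a few lines. Below is a minimal NumPy sketch, not the paper's implementation: magnitude pruning zeroes the smallest-magnitude weights, and uniform 8-bit quantisation maps float32 weights to int8, which alone accounts for a 4x storage reduction. The toy weight matrix, function names, and the 50% sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)  # toy layer, not from the paper

def prune(w, sparsity):
    """Magnitude pruning: zero out the fraction `sparsity` of smallest-|w| weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantise(w, bits=8):
    """Uniform symmetric quantisation of float32 weights to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax          # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

pruned = prune(weights, sparsity=0.5)
q, scale = quantise(pruned)

sparsity = float((pruned == 0).mean())       # ~0.5 of weights are now zero
compression = weights.nbytes / q.nbytes      # 4.0: float32 -> int8 storage
dequantised = q.astype(np.float32) * scale   # approximate reconstruction
```

In practice the pruned network is fine-tuned before quantisation to recover accuracy; the sketch only shows the storage side of the compression the results table reports.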

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Model Compression | CIFAR-10 | Size (MB) | 1.9 | ShuffleNet – Quantised |
| Model Compression | CIFAR-10 | Size (MB) | 2.9 | MobileNet – Quantised |
| Model Compression | CIFAR-10 | Size (MB) | 54.6 | AlexNet – Quantised |
| Network Pruning | CIFAR-10 | Inference Time (ms) | 4.74 | MobileNet – Quantised |
| Network Pruning | CIFAR-10 | Inference Time (ms) | 5.23 | AlexNet – Quantised |
| Network Pruning | CIFAR-10 | Inference Time (ms) | 23.15 | ShuffleNet – Quantised |
| 2D Classification | CIFAR-10 | Size (MB) | 1.9 | ShuffleNet – Quantised |
| 2D Classification | CIFAR-10 | Size (MB) | 2.9 | MobileNet – Quantised |
| 2D Classification | CIFAR-10 | Size (MB) | 54.6 | AlexNet – Quantised |

Related Papers

- Linearity-based neural network compression (2025-06-26)
- Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum (2025-06-09)
- MUC-G4: Minimal Unsat Core-Guided Incremental Verification for Deep Neural Network Compression (2025-06-03)
- TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks (2025-05-29)
- Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models (2025-05-22)
- Is Quantum Optimization Ready? An Effort Towards Neural Network Compression using Adiabatic Quantum Computing (2025-05-22)
- Certified Neural Approximations of Nonlinear Dynamics (2025-05-21)
- Adaptive Pruning of Deep Neural Networks for Resource-Aware Embedded Intrusion Detection on the Edge (2025-05-20)