
AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks

Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh

2021-06-23 · NeurIPS 2021 · Network Pruning
Paper · PDF · Code (official)

Abstract

The increasing computational requirements of deep neural networks (DNNs) have led to significant interest in obtaining DNN models that are sparse, yet accurate. Recent work has investigated the even harder case of sparse training, where the DNN weights are sparse for as much of training as possible, to reduce computational costs. Existing sparse training methods are often empirical and can have lower accuracy relative to the dense baseline. In this paper, we present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of DNNs, demonstrate convergence for a variant of the algorithm, and show that AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets; at high sparsity levels, AC/DC even outperforms existing methods that rely on accurate pre-trained dense models. An important property of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process. This is useful in practice, where compressed variants may be desirable for deployment in resource-constrained settings without re-doing the entire training flow, and it also provides insights into the accuracy gap between dense and compressed models. The code is available at: https://github.com/IST-DASLab/ACDC.
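
The abstract describes the core AC/DC schedule: training alternates between decompressed (dense) phases, in which all weights are updated, and compressed phases, in which a magnitude-pruning mask is applied and the pruned weights are held at zero. Below is a minimal PyTorch sketch of that alternation. The phase length, per-layer magnitude pruning, and helper names (magnitude_masks, train_acdc) are illustrative assumptions, not the authors' implementation; the official code is linked above.

```python
# Minimal sketch of AC/DC-style alternating compressed/decompressed training.
# Hypothetical schedule and helpers; see https://github.com/IST-DASLab/ACDC
# for the official implementation.
import torch


def magnitude_masks(model, sparsity):
    """Per-layer binary masks keeping the largest-magnitude weights (assumption)."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases and norm parameters
            continue
        k = int(p.numel() * sparsity)  # number of weights to prune
        if k == 0:
            continue
        threshold = p.abs().flatten().kthvalue(k).values
        masks[name] = (p.abs() > threshold).float()
    return masks


def train_acdc(model, loader, epochs, sparsity=0.9, phase_len=5, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    masks = None
    for epoch in range(epochs):
        # Alternate phases: even phases train dense, odd phases train sparse.
        compressed = (epoch // phase_len) % 2 == 1
        if compressed and masks is None:
            # Entering a compressed phase: recompute masks and prune once.
            masks = magnitude_masks(model, sparsity)
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
        elif not compressed:
            masks = None  # decompressed phase: all weights free to update
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if compressed:
                # Keep pruned weights at zero throughout the sparse phase.
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.mul_(masks[name])
    return model
```

This sketch also illustrates the co-training property mentioned in the abstract: the weights at the end of a decompressed phase give a dense model, while applying the final masks gives the sparse counterpart, yielding a sparse-dense model pair from one training run.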

Results

Task            | Dataset                            | Metric         | Value | Model
Network Pruning | ImageNet (ResNet-50, 90% sparsity) | Top-1 Accuracy | 75.64 | AC/DC
Network Pruning | ImageNet                           | Accuracy       | 73.14 | ResNet50
Network Pruning | CIFAR-100                          | Accuracy       | 79    | Dense
Network Pruning | CIFAR-100                          | Accuracy       | 78.2  | AC/DC

Related Papers

Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum (2025-06-09)
TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks (2025-05-29)
Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models (2025-05-22)
Adaptive Pruning of Deep Neural Networks for Resource-Aware Embedded Intrusion Detection on the Edge (2025-05-20)
Bi-LSTM based Multi-Agent DRL with Computation-aware Pruning for Agent Twins Migration in Vehicular Embodied AI Networks (2025-05-09)
Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators (2025-05-08)
ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations (2025-05-05)
Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate (2025-05-04)