Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HRank: Filter Pruning using High-Rank Feature Map

Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, Ling Shao

2020-02-24 · CVPR 2020 · Network Pruning
Paper · PDF · Code (official)

Abstract

Neural network pruning offers a promising route to deploying deep neural networks on resource-limited devices. However, existing methods still suffer from training inefficiency and high labor cost in pruning design, owing to the lack of theoretical guidance for identifying non-salient network components. In this paper, we propose a novel filter pruning method that exploits the high rank of feature maps (HRank). HRank is inspired by the discovery that the average rank of the feature maps generated by a single filter is always the same, regardless of how many image batches the CNN receives. Based on this observation, we develop a mathematically formulated method that prunes filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, so the pruned results can be easily reproduced. Moreover, we show experimentally that weights with high-rank feature maps carry more important information, such that even when a portion of them is not updated, very little damage is done to model performance. Without introducing any additional constraints, HRank yields significant improvements over the state of the art in FLOPs and parameter reduction at similar accuracy. For example, with ResNet-110 we achieve a 58.2% FLOPs reduction by removing 59.2% of the parameters, with only a 0.14% loss in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8% FLOPs reduction by removing 36.7% of the parameters, with only a 1.17% loss in top-1 accuracy on ImageNet. Code is available at https://github.com/lmbxmu/HRank.
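The scoring step at the heart of the method is simple to illustrate. Below is a minimal sketch in PyTorch of how one might estimate per-filter feature-map ranks and select the lowest-ranked filters for pruning. This is not the authors' implementation (see the linked repository for that); the function names and the `prune_ratio` parameter are placeholders chosen for the example.

```python
# Minimal sketch of HRank-style filter scoring: rank each filter's
# feature maps over a batch, average, and flag low-rank filters.
# Not the official code; names and prune_ratio are illustrative.
import torch
import torch.nn as nn


@torch.no_grad()
def average_feature_map_ranks(conv: nn.Conv2d, inputs: torch.Tensor) -> torch.Tensor:
    """Average rank of each filter's feature maps over a batch.

    inputs: (N, C_in, H, W) images or activations feeding this layer.
    Returns a (C_out,) tensor with one average rank per filter.
    """
    fmaps = conv(inputs)                                   # (N, C_out, h, w)
    n, c_out, h, w = fmaps.shape
    # matrix_rank batches over leading dims: one rank per (h, w) feature map.
    ranks = torch.linalg.matrix_rank(fmaps.reshape(n * c_out, h, w).float())
    return ranks.reshape(n, c_out).float().mean(dim=0)    # average over batch


def low_rank_filter_indices(conv: nn.Conv2d, inputs: torch.Tensor,
                            prune_ratio: float = 0.5) -> torch.Tensor:
    """Indices of the filters with the lowest average feature-map rank."""
    avg_ranks = average_feature_map_ranks(conv, inputs)
    k = int(prune_ratio * conv.out_channels)
    return torch.argsort(avg_ranks)[:k]                    # lowest ranks first


if __name__ == "__main__":
    # Toy example: score the filters of a single conv layer on random images.
    conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
    images = torch.randn(8, 3, 32, 32)
    print(low_rank_filter_indices(conv, images, prune_ratio=0.25))
```

Because the paper observes that the average rank per filter is stable across batches, a small probe batch already yields a usable ordering; the official implementation additionally handles per-layer pruning ratios and fine-tuning, which this sketch omits.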

Related Papers

Hyperpruning: Efficient Search through Pruned Variants of Recurrent Neural Networks Leveraging Lyapunov Spectrum (2025-06-09)
TSENOR: Highly-Efficient Algorithm for Finding Transposable N:M Sparse Masks (2025-05-29)
Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models (2025-05-22)
Adaptive Pruning of Deep Neural Networks for Resource-Aware Embedded Intrusion Detection on the Edge (2025-05-20)
Bi-LSTM based Multi-Agent DRL with Computation-aware Pruning for Agent Twins Migration in Vehicular Embodied AI Networks (2025-05-09)
Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators (2025-05-08)
ReplaceMe: Network Simplification via Layer Pruning and Linear Transformations (2025-05-05)
Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate (2025-05-04)