HRank: Filter Pruning using High-Rank Feature Map
- URL: http://arxiv.org/abs/2002.10179v2
- Date: Mon, 16 Mar 2020 23:50:13 GMT
- Title: HRank: Filter Pruning using High-Rank Feature Map
- Authors: Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang,
Yonghong Tian, Ling Shao
- Abstract summary: We propose a novel filter pruning method by exploring the High Rank of feature maps (HRank).
Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same.
Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps.
- Score: 149.86903824840752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network pruning offers a promising prospect for deploying deep
neural networks on resource-limited devices. However, existing methods are still
challenged by training inefficiency and the labor cost of pruning designs, owing
to the lack of theoretical guidance on non-salient network components. In this
paper, we propose a novel filter pruning method that explores the High Rank of
feature maps (HRank). HRank is inspired by the discovery that the average rank
of the feature maps generated by a single filter is always the same, regardless
of how many image batches the CNN receives. Based on HRank, we develop a
mathematically formulated method to prune filters with low-rank feature maps.
The principle behind our pruning is that low-rank feature maps contain less
information, so the pruned results can be easily reproduced. In addition, we
experimentally show that weights with high-rank feature maps contain more
important information, such that even when a portion of them is not updated,
very little damage is done to model performance. Without introducing any
additional constraints, HRank leads to significant improvements over the state
of the art in FLOPs and parameter reduction at similar accuracies. For example,
with ResNet-110, we achieve a 58.2% FLOPs reduction by removing 59.2% of the
parameters, with only a 0.14% loss in top-1 accuracy on CIFAR-10. With
ResNet-50, we achieve a 43.8% FLOPs reduction by removing 36.7% of the
parameters, with only a 1.17% loss in top-1 accuracy on ImageNet. The code is
available at https://github.com/lmbxmu/HRank.
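The rank-based criterion described in the abstract lends itself to a compact illustration. The sketch below is not the authors' released implementation; the model, the chosen layer, the random stand-in batches, and the pruning ratio are all assumptions made for illustration. It scores the filters of one convolutional layer by the average matrix rank of their feature maps and marks the lowest-scoring filters as pruning candidates:

```python
import torch
import torchvision

# Minimal sketch of the rank-based filter scoring idea from the abstract.
# Assumptions for illustration only: the model, the chosen layer, the random
# stand-in batches, and the pruning ratio are not taken from the paper.
model = torchvision.models.resnet50(weights=None).eval()
layer = model.layer1[0].conv1                 # any convolutional layer of interest

feature_maps = []
hook = layer.register_forward_hook(lambda m, inp, out: feature_maps.append(out.detach()))

with torch.no_grad():
    for _ in range(4):                        # a few batches; the abstract notes the
        model(torch.randn(8, 3, 224, 224))    # average rank is stable across batches
hook.remove()

maps = torch.cat(feature_maps)                # (N, C, H, W)
# Rank of each H x W feature map, averaged over the N sampled images -> (C,)
ranks = torch.linalg.matrix_rank(maps).float().mean(dim=0)

prune_ratio = 0.3                             # illustrative ratio, not from the paper
num_prune = int(prune_ratio * ranks.numel())
prune_idx = torch.argsort(ranks)[:num_prune]  # filters with the lowest average rank
print("candidate filters to prune:", prune_idx.tolist())
```

In the paper the per-layer ranks drive how many filters each layer keeps; the single global ratio above is only a stand-in to keep the sketch short.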
Related papers
- HRel: Filter Pruning based on High Relevance between Activation Maps and
Class Labels [11.409989603679614]
This paper proposes an Information Bottleneck theory based filter pruning method that uses a statistical measure called Mutual Information (MI).
Unlike existing MI-based pruning methods, the proposed method determines the significance of the filters purely based on the relationship between their corresponding activation maps and the class labels (a loose sketch of such an MI-based score is given after this list).
The proposed method shows state-of-the-art pruning results for the LeNet-5, VGG-16, ResNet-56, ResNet-110 and ResNet-50 architectures.
arXiv Detail & Related papers (2022-02-22T08:12:22Z)
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters [151.2423480789271]
A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over the state of the art.
arXiv Detail & Related papers (2022-02-15T04:53:24Z)
- SNF: Filter Pruning via Searching the Proper Number of Filters [0.0]
Filter pruning aims to remove redundant filters and makes it possible to deploy CNNs on terminal devices.
We propose a new filter pruning method that searches for the proper number of filters (SNF).
SNF searches for the most reasonable number of filters to reserve in each layer and then prunes filters according to specific criteria.
arXiv Detail & Related papers (2021-12-14T10:37:25Z)
- Network Compression via Central Filter [9.585818883354449]
We propose a novel filter pruning method, Central Filter (CF), which suggests a filter is approximately equal to a set of other filters after appropriate adjustments.
CF yields state-of-the-art performance on various benchmark networks and datasets.
arXiv Detail & Related papers (2021-12-10T12:51:04Z)
- Toward Compact Deep Neural Networks via Energy-Aware Pruning [2.578242050187029]
We propose a novel energy-aware pruning method that quantifies the importance of each filter in the network using the nuclear norm (NN); a rough sketch of such a nuclear-norm score is given after this list.
We achieve competitive results, with 40.4%/49.8% FLOPs and 45.9%/52.9% parameter reductions at 94.13%/94.61% top-1 accuracy with ResNet-56/110 on CIFAR-10.
arXiv Detail & Related papers (2021-03-19T15:33:16Z)
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network, named the Dagger module, to induce pruning.
In addition, to prune filters under given FLOPs constraints, we leverage an explicit FLOPs-aware regularization that directly drives pruning toward the target FLOPs.
arXiv Detail & Related papers (2020-10-28T15:26:40Z)
- SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% parameters and 60.2% FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z)
- ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting [105.97936163854693]
We propose ResRep, which slims down a CNN by reducing the width (number of output channels) of convolutional layers.
Inspired by neurobiology research on the independence of remembering and forgetting, we propose to re-parameterize a CNN into remembering parts and forgetting parts.
We equivalently merge the remembering and forgetting parts into the original architecture with narrower layers.
arXiv Detail & Related papers (2020-07-07T07:56:45Z)
- Filter Sketch for Network Pruning [184.41079868885265]
We propose a novel network pruning approach that preserves the information of pre-trained network weights (filters).
Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights.
Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of FLOPs and prunes 59.9% of network parameters with negligible accuracy cost.
arXiv Detail & Related papers (2020-01-23T13:57:08Z)
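As noted in the HRel entry above, the idea of scoring filters by the mutual information between their activation maps and the class labels can be illustrated with a loose sketch. The scalar activation summary, the binning scheme, and the synthetic data below are assumptions made for illustration, not the paper's estimator:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def filter_mi_scores(activations: np.ndarray, labels: np.ndarray, bins: int = 16) -> np.ndarray:
    """Loose sketch: MI between a per-filter activation summary and class labels.

    activations: (N, C, H, W) activation maps for N labelled images.
    labels:      (N,) integer class labels.
    Returns a (C,) array; filters with low MI would be pruning candidates.
    """
    summaries = activations.mean(axis=(2, 3))            # (N, C) scalar summary per map
    scores = np.empty(summaries.shape[1])
    for c in range(summaries.shape[1]):
        # Discretise the continuous summary so mutual_info_score applies.
        edges = np.histogram_bin_edges(summaries[:, c], bins)
        scores[c] = mutual_info_score(labels, np.digitize(summaries[:, c], edges))
    return scores

# Illustrative usage with synthetic activations and labels.
acts = np.random.randn(64, 8, 16, 16)
labels = np.random.randint(0, 10, size=64)
print(filter_mi_scores(acts, labels).round(3))
```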
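Likewise, the nuclear-norm criterion from the energy-aware pruning entry admits an equally small sketch. The shapes and the random inputs are again illustrative assumptions; the per-filter score is simply the batch-averaged sum of singular values of its feature maps:

```python
import torch

def nuclear_norm_scores(feature_maps: torch.Tensor) -> torch.Tensor:
    """Per-filter importance as the batch-averaged nuclear norm of feature maps.

    feature_maps: tensor of shape (N, C, H, W) collected from one layer.
    Returns a (C,) tensor; smaller values suggest less informative filters.
    """
    # svdvals operates on the trailing two dimensions, giving (N, C, min(H, W)).
    singular_values = torch.linalg.svdvals(feature_maps.float())
    return singular_values.sum(dim=-1).mean(dim=0)

# Illustrative usage with random maps standing in for real activations.
maps = torch.randn(8, 16, 32, 32)
print(nuclear_norm_scores(maps).shape)   # torch.Size([16])
```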
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.