Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters
- URL: http://arxiv.org/abs/2202.07190v1
- Date: Tue, 15 Feb 2022 04:53:24 GMT
- Title: Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters
- Authors: Mingbao Lin, Liujuan Cao, Yuxin Zhang, Ling Shao, Chia-Wen Lin,
Rongrong Ji
- Abstract summary: A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over the state-of-the-arts.
- Score: 151.2423480789271
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper focuses on filter-level network pruning. A novel pruning method,
termed CLR-RNF, is proposed. We first reveal a "long-tail" pruning problem in
magnitude-based weight pruning methods, and then propose a
computation-aware measurement for individual weight importance, followed by a
Cross-Layer Ranking (CLR) of weights to identify and remove the bottom-ranked
weights. Consequently, the per-layer sparsity makes up the pruned network
structure in our filter pruning. Then, we introduce a recommendation-based
filter selection scheme where each filter recommends a group of its closest
filters. To pick the preserved filters from these recommended groups, we
further devise a k-Reciprocal Nearest Filter (RNF) selection scheme where the
selected filters fall into the intersection of these recommended groups. Both
our pruned network structure and the filter selection are non-learning
processes, which significantly reduces the pruning complexity and
differentiate our method from existing works. We conduct image classification
on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over the
state-of-the-art methods. For example, on CIFAR-10, CLR-RNF removes 74.1% of the
FLOPs and 95.0% of the parameters from VGGNet-16 with even a 0.3% accuracy
improvement. On ImageNet, it removes 70.2% of the FLOPs and 64.8% of the
parameters from ResNet-50 with only a 1.7% top-5 accuracy drop. Our project is
at https://github.com/lmbxmu/CLR-RNF.
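For intuition, the snippet below gives a minimal sketch of the two non-learning stages described above, written in NumPy. The magnitude-times-cost importance score, the vote-count tie-break among reciprocally recommended filters, and all function names are illustrative assumptions; the actual CLR-RNF measurement and selection rules are those defined in the paper and its repository.

```python
# Minimal sketch of cross-layer ranking (CLR) and k-reciprocal nearest filter
# (RNF) selection. The importance score and tie-break below are assumptions.
import numpy as np

def cross_layer_ranking_sparsity(layer_weights, layer_flops, global_prune_ratio):
    """Rank every weight across all layers by a computation-aware score and
    derive the per-layer sparsity from the globally bottom-ranked weights."""
    scores, owners = [], []
    for idx, (w, flops) in enumerate(zip(layer_weights, layer_flops)):
        # Assumed computation-aware importance: weight magnitude scaled by the
        # per-weight computation cost of its layer.
        s = np.abs(w).ravel() * (flops / w.size)
        scores.append(s)
        owners.append(np.full(s.size, idx))
    scores, owners = np.concatenate(scores), np.concatenate(owners)

    # Cross-Layer Ranking: drop the globally bottom-ranked weights.
    n_prune = int(global_prune_ratio * scores.size)
    pruned = np.argsort(scores)[:n_prune]

    # The resulting per-layer sparsity defines the pruned network structure.
    return [np.count_nonzero(owners[pruned] == idx) / w.size
            for idx, w in enumerate(layer_weights)]

def k_reciprocal_nearest_filters(weight, k, n_keep):
    """Pick preserved filters for one layer: each filter 'recommends' its k
    closest filters, and reciprocally recommended filters are preferred."""
    flat = weight.reshape(weight.shape[0], -1)
    # Pairwise Euclidean distances between flattened filters.
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    knn = np.argsort(dist, axis=1)[:, :k]   # recommendation groups

    votes = np.zeros(weight.shape[0])
    for i in range(weight.shape[0]):
        for j in knn[i]:
            if i in knn[j]:                 # j is a k-reciprocal nearest filter of i
                votes[j] += 1
    # Keep the filters most often found in the intersection of the
    # recommendation groups (assumed tie-break by vote count).
    return np.sort(np.argsort(-votes)[:n_keep])
```

Under these assumptions, the per-layer sparsity from the first function would fix how many filters each layer keeps, and the second function would then pick which filters survive in that layer.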
Related papers
- Pruning by Active Attention Manipulation [49.61707925611295]
Filter pruning of a CNN is typically achieved by applying discrete masks on the CNN's filter weights or activation maps, post-training.
Here, we present a new filter-importance-scoring concept named pruning by active attention manipulation (PAAM).
PAAM learns analog filter scores from the filter weights by optimizing a cost function regularized by an additive term in the scores.
arXiv Detail & Related papers (2022-10-20T09:17:02Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning methods introduce structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
- End-to-End Sensitivity-Based Filter Pruning [49.61707925611295]
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
arXiv Detail & Related papers (2022-04-15T10:21:05Z)
- SNF: Filter Pruning via Searching the Proper Number of Filters [0.0]
Filter pruning aims to remove redundant filters and makes it possible to deploy CNNs on terminal devices.
We propose a new filter pruning method by searching the proper number of filters (SNF).
SNF is dedicated to searching for the most reasonable number of reserved filters for each layer and then pruning filters with specific criteria.
arXiv Detail & Related papers (2021-12-14T10:37:25Z)
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Deep Model Compression based on the Training History [13.916984628784768]
We propose a novel History Based Filter Pruning (HBFP) method that utilizes network training history for filter pruning.
The proposed pruning method outperforms the state-of-the-art, reducing FLOPs (floating-point operations) by 97.98%, 83.42%, 78.43%, and 74.95% for the LeNet-5, VGG-16, ResNet-56, and ResNet-110 models, respectively.
arXiv Detail & Related papers (2021-01-30T06:04:21Z)
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network named Dagger module to induce pruning.
In addition, to help prune filters under certain FLOPs constraints, we leverage an explicit FLOPs-aware regularization to directly promote pruning filters toward target FLOPs (a generic sketch of such a regularization term appears after this list).
arXiv Detail & Related papers (2020-10-28T15:26:40Z)
- Filter Sketch for Network Pruning [184.41079868885265]
We propose a novel network pruning approach that preserves the information of pre-trained network weights (filters).
Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights.
Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of FLOPs and prunes 59.9% of network parameters with negligible accuracy cost.
arXiv Detail & Related papers (2020-01-23T13:57:08Z)
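As a side note on the FLOPs-aware regularization mentioned in the Data Agnostic Filter Gating entry above, the snippet below is a generic, hypothetical sketch of such a term: soft per-filter gates give an expected FLOP count, and a quadratic penalty pushes it toward a target budget. The gate representation, the penalty form, and all names are assumptions and do not reproduce the Dagger module of the cited paper.

```python
# Hypothetical FLOPs-aware regularizer: gates in [0, 1] weight each filter's
# FLOP cost, and the expected total is pulled toward a target budget.
import torch

def expected_flops(gates, flops_per_filter):
    """gates: list of (out_channels,) tensors in [0, 1], one per layer;
    flops_per_filter: matching list of per-filter FLOP costs."""
    return sum((g * f).sum() for g, f in zip(gates, flops_per_filter))

def flops_regularizer(gates, flops_per_filter, target_flops, strength=1e-2):
    """Quadratic penalty pushing the gated network's expected FLOPs toward the target."""
    ratio = expected_flops(gates, flops_per_filter) / target_flops
    return strength * (ratio - 1.0) ** 2

# Usage sketch (added to the task loss while training the gates):
#   loss = task_loss + flops_regularizer(gates, flops_per_filter, target_flops)
```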