SNF: Filter Pruning via Searching the Proper Number of Filters
- URL: http://arxiv.org/abs/2112.07282v1
- Date: Tue, 14 Dec 2021 10:37:25 GMT
- Title: SNF: Filter Pruning via Searching the Proper Number of Filters
- Authors: Pengkun Liu, Yaru Yue, Yanjun Guo, Xingxiang Tao, Xiaoguang Zhou
- Abstract summary: Filter pruning aims to remove redundant filters, making it possible to deploy CNNs on terminal devices.
We propose a new filter pruning method that searches for the proper number of filters (SNF).
SNF is dedicated to finding the most reasonable number of reserved filters for each layer and then pruning filters with specific criteria.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) carry substantial parameter
redundancy; filter pruning aims to remove the redundant filters and makes it
possible to deploy CNNs on terminal devices. However, previous works focus on
designing evaluation criteria of filter importance and then prune the less
important filters with a fixed pruning rate or a fixed number, without
considering how many filters each layer should most reasonably retain. From
this perspective, we propose a new filter pruning method that searches for the
proper number of filters (SNF). SNF is dedicated to finding the most
reasonable number of reserved filters for each layer and then pruning filters
with specific criteria. It can tailor the most suitable network structure at
different FLOPs budgets. Filter pruning with our method achieves
state-of-the-art (SOTA) accuracy on CIFAR-10 and competitive performance on
ImageNet ILSVRC-2012. On CIFAR-10, SNF based on the ResNet-56 network
increases Top-1 accuracy by 0.14% at a 52.94% FLOPs reduction, and pruning
ResNet-110 improves Top-1 accuracy by 0.03% while reducing FLOPs by 68.68%.
For ImageNet, at a 52.10% FLOPs reduction, Top-1 accuracy drops by only 0.74%.
The code is available at https://github.com/pk-l/SNF.
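The abstract's two-step recipe (first decide how many filters each layer keeps, then prune down to that count with a criterion) can be illustrated with a minimal NumPy sketch of the second step. The L1-norm criterion and the `prune_layer` helper below are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def prune_layer(weights, n_keep):
    """Keep the n_keep filters with the largest L1 norm.

    weights: conv kernel of shape (out_c, in_c, kh, kw).
    Returns the pruned kernel and the (sorted) indices of kept filters.
    """
    # Score each output filter by the L1 norm of its weights.
    scores = np.abs(weights).sum(axis=(1, 2, 3))
    # Take the n_keep highest-scoring filters, preserving original order.
    kept = np.sort(np.argsort(scores)[-n_keep:])
    return weights[kept], kept

# Toy layer with 8 filters; suppose the per-layer search decided to keep 4.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_layer(w, 4)
print(pruned.shape)  # (4, 3, 3, 3)
```

In the paper's framing, the interesting part is choosing `n_keep` per layer under a target FLOPs budget; the pruning criterion itself is applied only after that count is fixed.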
Related papers
- Pruning by Active Attention Manipulation [49.61707925611295]
Filter pruning of a CNN is typically achieved by applying discrete masks on the CNN's filter weights or activation maps, post-training.
Here, we present a new filter-importance-scoring concept named pruning by active attention manipulation (PAAM)
PAAM learns analog filter scores from the filter weights by optimizing a cost function regularized by an additive term in the scores.
arXiv Detail & Related papers (2022-10-20T09:17:02Z)
- End-to-End Sensitivity-Based Filter Pruning [49.61707925611295]
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
arXiv Detail & Related papers (2022-04-15T10:21:05Z)
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters [151.2423480789271]
A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over the state-of-the-arts.
arXiv Detail & Related papers (2022-02-15T04:53:24Z)
- Network Compression via Central Filter [9.585818883354449]
We propose a novel filter pruning method, Central Filter (CF), which suggests a filter is approximately equal to a set of other filters after appropriate adjustments.
CF yields state-of-the-art performance on various benchmark networks and datasets.
arXiv Detail & Related papers (2021-12-10T12:51:04Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF)
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Deep Model Compression based on the Training History [13.916984628784768]
We propose a novel History Based Filter Pruning (HBFP) method that utilizes network training history for filter pruning.
The proposed pruning method outperforms the state-of-the-art in terms of FLOPs reduction (floating-point operations) by 97.98%, 83.42%, 78.43%, and 74.95% for LeNet-5, VGG-16, ResNet-56, and ResNet-110 models, respectively.
arXiv Detail & Related papers (2021-01-30T06:04:21Z)
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate important scores for filters and prune those with smaller scores.
In this paper, we propose a data filter pruning method that uses an auxiliary network named Dagger module to induce pruning.
In addition, to help prune filters with certain FLOPs constraints, we leverage an explicit FLOPs-aware regularization to directly promote pruning filters toward target FLOPs.
arXiv Detail & Related papers (2020-10-28T15:26:40Z)
- REPrune: Filter Pruning via Representative Election [3.867363075280544]
"REPrune" is a novel filter pruning method that selects representative filters via clustering.
It reduces more than 49% FLOPs, with 0.53% accuracy gain on ResNet-110 for CIFAR-10.
Also, REPrune reduces more than 41.8% FLOPs with a 1.67% drop in Top-1 validation accuracy on ResNet-18 for ImageNet.
arXiv Detail & Related papers (2020-07-14T09:41:16Z)
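Several of the listed methods score filters by simple magnitude statistics; for instance, the BN-based entry above evaluates filter importance from the batch-normalization parameters of a pre-trained CNN. A minimal sketch of one common proxy of this kind, pruning the filters with the smallest |γ| scales (the function names and ratio scheme here are illustrative assumptions, not that paper's exact method):

```python
import numpy as np

def bn_gamma_scores(gamma):
    # A common proxy: a filter whose BN scale |gamma| is near zero
    # contributes little to the next layer's activations.
    return np.abs(gamma)

def filters_to_prune(gamma, prune_ratio):
    # Prune the fraction of filters with the smallest |gamma|.
    n_prune = int(len(gamma) * prune_ratio)
    order = np.argsort(bn_gamma_scores(gamma))
    return set(order[:n_prune].tolist())

# One BN layer's learned scales; prune the weakest half.
gamma = np.array([0.9, 0.01, 0.5, -0.02, 1.3, 0.0])
print(sorted(filters_to_prune(gamma, 0.5)))  # → [1, 3, 5]
```

Criteria like this are cheap because they need no extra data passes, which is one reason BN-parameter scoring is attractive compared to feature-map-based scores.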
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.