Network Compression via Central Filter
- URL: http://arxiv.org/abs/2112.05493v2
- Date: Mon, 13 Dec 2021 05:23:18 GMT
- Title: Network Compression via Central Filter
- Authors: Yuanzhi Duan, Xiaofang Hu, Yue Zhou, Qiang Liu, Shukai Duan
- Abstract summary: We propose a novel filter pruning method, Central Filter (CF), which suggests a filter is approximately equal to a set of other filters after appropriate adjustments.
CF yields state-of-the-art performance on various benchmark networks and datasets.
- Score: 9.585818883354449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network pruning has remarkable performance for reducing the complexity
of deep network models. Recent network pruning methods usually focused on
removing unimportant or redundant filters in the network. In this paper, by
exploring the similarities between feature maps, we propose a novel filter
pruning method, Central Filter (CF), which suggests that a filter is
approximately equal to a set of other filters after appropriate adjustments.
Our method is based on the discovery that the average similarity between
feature maps changes very little, regardless of the number of input images.
Based on this finding, we establish similarity graphs on feature maps and
calculate the closeness centrality of each node to select the Central Filter.
Moreover, we design a method to directly adjust weights in the next layer
corresponding to the Central Filter, effectively minimizing the error caused by
pruning. Through experiments on various benchmark networks and datasets, CF
yields state-of-the-art performance. For example, with ResNet-56, CF reduces
approximately 39.7% of FLOPs by removing 47.1% of the parameters, with even
0.33% accuracy improvement on CIFAR-10. With GoogLeNet, CF reduces
approximately 63.2% of FLOPs by removing 55.6% of the parameters, with only a
small loss of 0.35% in top-1 accuracy on CIFAR-10. With ResNet-50, CF reduces
approximately 47.9% of FLOPs by removing 36.9% of the parameters, with only a
small loss of 1.07% in top-1 accuracy on ImageNet. The code is available at
https://github.com/8ubpshLR23/Central-Filter.
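To make the selection step concrete, below is a minimal sketch of the pipeline the abstract describes: build a similarity graph over a layer's feature maps, rank filters by closeness centrality, and fold pruned channels into the retained (central) ones by adjusting the next layer's weights. This is not the authors' implementation (see the repository above); the cosine-similarity measure, the edge threshold, the keep ratio, and the least-squares compensation rule are all illustrative assumptions.

```python
# Hedged sketch of the Central Filter idea, NOT the authors' code.
# Assumptions: cosine similarity between batch-averaged feature maps,
# a fixed edge threshold, and least-squares linear compensation.
import torch
import torch.nn.functional as F
import networkx as nx


def select_central_filters(feature_maps: torch.Tensor,
                           keep_ratio: float = 0.6,
                           sim_threshold: float = 0.7):
    """feature_maps: (N, C, H, W) activations of one layer for a small batch.

    The abstract notes that the average similarity between feature maps is
    nearly independent of the number of input images, so a few images suffice.
    """
    n, c, h, w = feature_maps.shape
    flat = feature_maps.mean(dim=0).reshape(c, -1)        # (C, H*W), batch-averaged
    normed = F.normalize(flat, dim=1)
    similarity = normed @ normed.t()                      # cosine similarity, (C, C)

    # Similarity graph: nodes are filters, edges connect sufficiently similar ones.
    graph = nx.Graph()
    graph.add_nodes_from(range(c))
    for i in range(c):
        for j in range(i + 1, c):
            if similarity[i, j].item() >= sim_threshold:
                graph.add_edge(i, j, weight=similarity[i, j].item())

    # Rank filters by closeness centrality and keep the most central ones.
    centrality = nx.closeness_centrality(graph)
    ranked = sorted(range(c), key=lambda idx: centrality[idx], reverse=True)
    keep = sorted(ranked[:max(1, int(keep_ratio * c))])
    return keep, flat


def fold_pruned_channels(next_weight: torch.Tensor, keep, flat_features: torch.Tensor):
    """Generic linear-compensation illustration; the paper's exact rule may differ.

    next_weight: (C_out, C_in, k, k) weight of the following conv layer.
    keep: indices of retained (central) input channels.
    flat_features: (C_in, H*W) batch-averaged feature maps from the layer above.
    """
    c_in = next_weight.shape[1]
    pruned = [p for p in range(c_in) if p not in set(keep)]
    kept_feats = flat_features[keep]                      # (K, H*W)
    for p in pruned:
        target = flat_features[p].unsqueeze(1)            # (H*W, 1)
        # Express the pruned map as a linear combination of the kept maps.
        coeffs = torch.linalg.lstsq(kept_feats.t(), target).solution.squeeze(1)
        for a, i in zip(coeffs, keep):
            # Fold the pruned channel's contribution into the central channel.
            next_weight[:, i] += a * next_weight[:, p]
    return next_weight[:, keep]
```

A typical (still hypothetical) use would be to run a few calibration batches through the network, call select_central_filters on each layer's activations, and then fold and slice the corresponding weights layer by layer.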
Related papers
- End-to-End Sensitivity-Based Filter Pruning [49.61707925611295] (2022-04-15)
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters [151.2423480789271] (2022-02-15)
A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over state-of-the-art methods.
- SNF: Filter Pruning via Searching the Proper Number of Filters [0.0] (2021-12-14)
Filter pruning aims to remove redundant filters and makes it possible to deploy CNNs on terminal devices.
We propose a new filter pruning method that searches for the proper number of filters (SNF).
SNF is dedicated to searching for the most reasonable number of reserved filters for each layer and then pruning filters with specific criteria.
- CHIP: CHannel Independence-based Pruning for Compact Neural Networks [13.868303041084431] (2021-10-26)
Filter pruning has been widely used for neural network compression because it enables practical acceleration.
We propose to perform efficient filter pruning using Channel Independence, a metric that measures the correlations among different feature maps.
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962] (2021-07-14)
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching a top-1 accuracy of 93.47%.
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314] (2020-10-28)
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network named Dagger module to induce pruning.
In addition, to help prune filters under certain FLOPs constraints, we leverage an explicit FLOPs-aware regularization to directly promote pruning filters toward the target FLOPs.
- HRank: Filter Pruning using High-Rank Feature Map [149.86903824840752] (2020-02-24)
We propose a novel filter pruning method by exploring the High Rank of feature maps (HRank).
Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same (a toy illustration of this rank criterion appears after this list).
Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps.
- Filter Sketch for Network Pruning [184.41079868885265] (2020-01-23)
We propose a novel network pruning approach that preserves the information of pre-trained network weights (filters).
Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights.
Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of FLOPs and prunes 59.9% of network parameters with negligible accuracy cost.
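As referenced in the HRank entry above, here is a toy illustration of a rank-based pruning criterion of the kind that entry describes: score each filter by the average matrix rank of its feature maps over a few images, then keep the highest-scoring filters. This is a hedged sketch based only on the one-sentence summary above, not the HRank authors' code; the batch size and keep ratio are assumptions.

```python
# Toy rank-based filter criterion in the spirit of the HRank summary above.
# Illustrative only; not the HRank authors' implementation.
import torch


def rank_scores(feature_maps: torch.Tensor) -> torch.Tensor:
    """feature_maps: (N, C, H, W) activations; returns a per-filter score (C,)."""
    # matrix_rank acts on the trailing (H, W) dimensions, giving a (N, C) tensor.
    ranks = torch.linalg.matrix_rank(feature_maps.float())
    return ranks.float().mean(dim=0)  # average rank per filter over the batch


def filters_to_keep(feature_maps: torch.Tensor, keep_ratio: float = 0.7):
    scores = rank_scores(feature_maps)
    k = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, k).indices.sort().values
```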
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.