REPrune: Filter Pruning via Representative Election
- URL: http://arxiv.org/abs/2007.06932v3
- Date: Tue, 21 Jul 2020 08:07:33 GMT
- Title: REPrune: Filter Pruning via Representative Election
- Authors: Mincheol Park, Woojeong Kim, Suhyun Kim
- Abstract summary: "REPrune" is a novel filter pruning method that selects representative filters via clustering.
It reduces FLOPs by more than 49% with a 0.53% accuracy gain on ResNet-110 for CIFAR-10.
It also reduces FLOPs by more than 41.8% with a 1.67% Top-1 validation accuracy loss on ResNet-18 for ImageNet.
- Score: 3.867363075280544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Even though norm-based filter pruning methods are widely accepted, it is
questionable whether the "smaller-norm-less-important" criterion is optimal for
determining which filters to prune. Especially when only a small fraction of the
original filters can be kept, it is more crucial to choose the filters that best
represent the full set of filters, regardless of their norm values. Our novel
pruning method, entitled "REPrune", addresses this problem by selecting
representative filters via clustering. By selecting one filter from each cluster
of similar filters and avoiding the selection of adjacent large filters, REPrune
achieves a better compression rate with similar accuracy. Our method also
recovers accuracy more rapidly and requires a smaller shift of filters during
fine-tuning. Empirically, REPrune reduces FLOPs by more than 49% with a 0.53%
accuracy gain on ResNet-110 for CIFAR-10. It also reduces FLOPs by more than
41.8% with a 1.67% Top-1 validation accuracy loss on ResNet-18 for ImageNet.
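As a rough, hedged illustration of the representative-election idea described in the abstract, the sketch below clusters the flattened filters of one convolutional layer with k-means and keeps, from each cluster, the filter closest to the centroid. The function name, the choice of k-means, and the example shapes are assumptions made here for illustration; this is not the authors' implementation.

# A minimal sketch of clustering-based representative filter selection,
# assuming one conv layer's filters are given as a (N, C, kH, kW) array.
import numpy as np
from sklearn.cluster import KMeans

def elect_representative_filters(weights, keep):
    """Return the indices of `keep` representative filters.

    weights : np.ndarray of shape (N, C, kH, kW), one conv layer's filters.
    keep    : number of filters to retain (= number of clusters).
    """
    n = weights.shape[0]
    flat = weights.reshape(n, -1)                      # one row per filter
    km = KMeans(n_clusters=keep, n_init=10, random_state=0).fit(flat)
    kept = []
    for c in range(keep):
        members = np.where(km.labels_ == c)[0]         # filters in this cluster
        # elect the member closest to the cluster centroid as the representative
        dists = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        kept.append(members[np.argmin(dists)])
    return np.sort(np.array(kept))

# Example: keep 32 of 64 filters from a randomly initialized 3x3 conv layer.
rng = np.random.default_rng(0)
layer_weights = rng.normal(size=(64, 16, 3, 3))
print(elect_representative_filters(layer_weights, keep=32))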
Related papers
- Efficient CNNs via Passive Filter Pruning [23.661189257759535]
Convolutional neural networks (CNNs) have shown state-of-the-art performance in various applications.
However, CNNs are resource-hungry due to their high computational and memory requirements.
Recent efforts toward achieving computational efficiency in CNNs involve filter pruning methods.
arXiv Detail & Related papers (2023-04-05T09:19:19Z)
- Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters [151.2423480789271]
A novel pruning method, termed CLR-RNF, is proposed for filter-level network pruning.
We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over state-of-the-art methods.
arXiv Detail & Related papers (2022-02-15T04:53:24Z)
- SNF: Filter Pruning via Searching the Proper Number of Filters [0.0]
Filter pruning aims to remove the redundant filters and provides the possibility for the application of CNN on terminal devices.
We propose a new filter pruning method that searches for the proper number of filters (SNF).
SNF is dedicated to searching for the most reasonable number of reserved filters for each layer and then pruning filters with specific criteria.
arXiv Detail & Related papers (2021-12-14T10:37:25Z)
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network, named the Dagger module, to induce pruning.
In addition, to prune filters under a given FLOPs constraint, we leverage an explicit FLOPs-aware regularization that directly promotes pruning toward the target FLOPs.
arXiv Detail & Related papers (2020-10-28T15:26:40Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
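As context for the norm-based and batch-norm-based criteria mentioned in the entry above, the hedged sketch below scores filters by their L1 weight norm or by the magnitude of the batch-norm scaling factor and keeps the top-scoring fraction. The paper's dependency-aware regularization mechanism itself is not reproduced here, and all function names are assumptions for illustration.

# A sketch of two conventional filter-scoring criteria: L1 weight norm and
# batch-norm scale magnitude. Illustrative only, not any specific paper's code.
import numpy as np

def l1_norm_scores(weights):
    """weights: (N, C, kH, kW) conv filters -> one L1-norm score per filter."""
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def bn_scale_scores(gamma):
    """gamma: (N,) batch-norm scaling factors -> one score per channel."""
    return np.abs(gamma)

def keep_mask(scores, prune_ratio):
    """Boolean mask that keeps the top (1 - prune_ratio) fraction of filters."""
    k = int(round(len(scores) * (1.0 - prune_ratio)))
    keep = np.argsort(scores)[::-1][:k]        # indices of the highest-scoring filters
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask

# Example: prune 40% of 64 filters by L1 norm (keeps 38 of 64).
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 16, 3, 3))
print(keep_mask(l1_norm_scores(w), prune_ratio=0.4).sum())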
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components in modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting (Method) to improve the representation capability of the network.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z)
- Convolutional Neural Network Pruning Using Filter Attenuation [10.282782377635106]
Filters are essential elements in convolutional neural networks (CNNs).
In filter pruning methods, a filter, with all of its components, including channels and connections, is removed.
We propose a CNN pruning method based on filter attenuation in which weak filters are not directly removed.
arXiv Detail & Related papers (2020-02-09T06:31:24Z)
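The entry above describes attenuating weak filters rather than removing them outright. The following minimal sketch, with assumed function names, weak ratio, and attenuation factor, scales down the weakest filters (by L1 norm) so they can still recover during subsequent training; it illustrates the general idea, not the paper's method.

# A minimal sketch of filter attenuation: weak filters are scaled down
# instead of being zeroed out or removed. Hypothetical illustration only.
import numpy as np

def attenuate_weak_filters(weights, weak_ratio=0.3, factor=0.1):
    """weights: (N, C, kH, kW). Attenuate the weakest filters instead of removing them."""
    scores = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)  # L1 norm per filter
    n_weak = int(round(len(scores) * weak_ratio))
    weak = np.argsort(scores)[:n_weak]         # indices of the weakest filters
    out = weights.copy()
    out[weak] *= factor                        # scale down rather than zero out
    return out

# Example: attenuate the weakest 30% of 64 filters by a factor of 0.1.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 16, 3, 3))
print(attenuate_weak_filters(w).shape)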
- Filter Grafting for Deep Neural Networks [71.39169475500324]
Filter grafting aims to improve the representation capability of Deep Neural Networks (DNNs).
We develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
For example, the grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset.
arXiv Detail & Related papers (2020-01-15T03:18:57Z)
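Both grafting entries above mention an entropy-based criterion for measuring filter information and an adaptive weighting strategy. The hedged sketch below estimates a filter's information as the histogram entropy of its weights and combines two networks' filters with weights derived from those entropies; the exact criterion and weighting in the papers may differ, and all names here are illustrative.

# A sketch of an entropy-based "filter information" measure and a simple
# adaptive grafting weight derived from it. Illustrative assumptions only.
import numpy as np

def filter_entropy(filt, bins=16):
    """Histogram-based entropy of one filter's weight values."""
    hist, _ = np.histogram(filt.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def graft_filters(f_a, f_b):
    """Convexly combine two filters, weighting the more informative one higher."""
    h_a, h_b = filter_entropy(f_a), filter_entropy(f_b)
    alpha = h_a / (h_a + h_b + 1e-12)          # adaptive weight from the two entropies
    return alpha * f_a + (1.0 - alpha) * f_b

# Example usage with two randomly initialized filters from different networks.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(16, 3, 3)), rng.normal(size=(16, 3, 3))
print(graft_filters(f1, f2).shape)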