SCOP: Scientific Control for Reliable Neural Network Pruning
- URL: http://arxiv.org/abs/2010.10732v2
- Date: Mon, 11 Jan 2021 03:06:52 GMT
- Title: SCOP: Scientific Control for Reliable Neural Network Pruning
- Authors: Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu,
Chang Xu
- Abstract summary: This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters are discovered through the adversarial competition between real and knockoff features.
Our method reduces the parameters of ResNet-101 by 57.8% and its FLOPs by 60.2%, with only a 0.01% top-1 accuracy loss on ImageNet.
- Score: 127.20073865874636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a reliable neural network pruning algorithm by setting up
a scientific control. Existing pruning methods have developed various
hypotheses to approximate the importance of filters to the network and then
execute filter pruning accordingly. To increase the reliability of the results,
we prefer to have a more rigorous research design by including a scientific
control group as an essential part to minimize the effect of all factors except
the association between the filter and expected network output. Acting as a
control group, knockoff features are generated to mimic the feature maps produced
by the network filters, but they are conditionally independent of the example
label given the real feature maps. We theoretically suggest that the knockoff
condition can be approximately preserved through the information propagation of
network layers. Besides the real feature map of an intermediate layer, the
corresponding knockoff feature is brought in as another auxiliary input signal
for the subsequent layers. Redundant filters can be discovered through the
adversarial competition between the real and knockoff features. Through experiments, we demonstrate
the superiority of the proposed algorithm over state-of-the-art methods. For
example, our method reduces the parameters of ResNet-101 by 57.8% and its FLOPs
by 60.2%, with only a 0.01% top-1 accuracy loss on ImageNet. The code is available at
https://github.com/huawei-noah/Pruning/tree/master/SCOP_NeurIPS2020.
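As a concrete reading of the mechanism described above, the following PyTorch sketch mixes a real feature map with a knockoff counterpart through per-channel gates and scores redundancy by how little the real gate dominates the knockoff gate. The class name KnockoffGate, the softmax gate parameterization, and the scoring rule are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
# Hypothetical sketch of the knockoff-based redundancy test; names and the gate
# parameterization are assumptions, not the official SCOP implementation.
import torch
import torch.nn as nn

class KnockoffGate(nn.Module):
    """Per-channel gates letting a real feature map and its knockoff
    counterpart compete as the input to the subsequent layers."""

    def __init__(self, num_channels: int):
        super().__init__()
        # One (real, knockoff) logit pair per channel; softmax keeps the two
        # gates on a common scale so they genuinely compete.
        self.logits = nn.Parameter(torch.zeros(num_channels, 2))

    def forward(self, real: torch.Tensor, knockoff: torch.Tensor) -> torch.Tensor:
        gates = torch.softmax(self.logits, dim=1)        # shape (C, 2)
        g_real = gates[:, 0].view(1, -1, 1, 1)
        g_knock = gates[:, 1].view(1, -1, 1, 1)
        return g_real * real + g_knock * knockoff

    def redundancy_scores(self) -> torch.Tensor:
        gates = torch.softmax(self.logits, dim=1)
        # A small (real - knockoff) gap means the filter adds little beyond the
        # label-independent control signal, so it is a pruning candidate.
        return gates[:, 0] - gates[:, 1]

# Toy usage: wrap an intermediate layer's output, train as usual, then prune
# the channels with the lowest scores.
gate = KnockoffGate(num_channels=64)
real = torch.randn(8, 64, 32, 32)
knockoff = torch.randn_like(real)   # a true knockoff would be generated to mimic `real`
mixed = gate(real, knockoff)
print(gate.redundancy_scores().topk(5, largest=False).indices.tolist())
```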
Related papers
- Structured Network Pruning by Measuring Filter-wise Interactions [6.037167142826297]
We propose SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), a structured network pruning approach.
During pruning, SNPFI automatically assigns the proper sparsity based on the filter utilization strength.
We empirically demonstrate the effectiveness of SNPFI on several commonly used CNN models.
arXiv Detail & Related papers (2023-07-03T05:26:05Z)
- Network Pruning via Feature Shift Minimization [8.593369249204132]
We propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by combining the information of both features and filters.
The proposed method yields state-of-the-art performance on various benchmark networks and datasets, verified by extensive experiments.
arXiv Detail & Related papers (2022-07-06T12:50:26Z)
- End-to-End Sensitivity-Based Filter Pruning [49.61707925611295]
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
arXiv Detail & Related papers (2022-04-15T10:21:05Z)
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method achieves outstanding performance (a generic sketch of this BN-based criterion appears after this list).
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
- Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks [3.5636461829966093]
We propose a filter pruning method using hierarchical group sparse regularization.
It can remove more than 50% of the parameters of ResNet on CIFAR-10 with only a 0.3% drop in test accuracy.
It also reduces the parameters of ResNet by 34% on TinyImageNet-200 while achieving higher accuracy than the baseline network.
arXiv Detail & Related papers (2020-11-04T16:29:41Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting, which reactivates invalid (unimportant) filters to improve the representation capability of CNNs rather than pruning them away.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z)
- A "Network Pruning Network" Approach to Deep Model Compression [62.68120664998911]
We present a filter pruning approach for deep model compression using a multitask network.
Our approach is based on learning a pruner network to prune a pre-trained target network.
The compressed model produced by our approach is generic and does not need any special hardware/software support.
arXiv Detail & Related papers (2020-01-15T20:38:23Z)
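Several entries above (e.g. "Batch Normalization Tells You Which Filter is Important" and "Dependency Aware Filter Pruning") rank filters by batch-norm scaling factors or weight norms. The snippet below is a minimal sketch of that generic criterion, assuming an untrained torchvision ResNet-18 and a fixed 70% keep ratio; it is not the exact procedure of any single paper listed here.

```python
# Generic filter-importance baseline: rank channels by |gamma|, the learned
# BatchNorm scaling factor. The ResNet-18 model and the 70% keep ratio are
# illustrative assumptions.
import torch
from torchvision.models import resnet18  # weights=None requires torchvision >= 0.13

model = resnet18(weights=None)  # in practice, load a pre-trained checkpoint

for name, module in model.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        gamma = module.weight.detach().abs()      # per-channel scaling factors
        keep = max(1, int(0.7 * gamma.numel()))   # keep the top 70% of channels
        kept_idx = gamma.topk(keep).indices       # channels to retain
        print(f"{name}: keeping {keep}/{gamma.numel()} channels, e.g. {kept_idx[:3].tolist()}")
```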