Toward domain generalized pruning by scoring out-of-distribution
importance
- URL: http://arxiv.org/abs/2210.13810v1
- Date: Tue, 25 Oct 2022 07:36:55 GMT
- Title: Toward domain generalized pruning by scoring out-of-distribution
importance
- Authors: Rizhao Cai, Haoliang Li, Alex Kot
- Abstract summary: Filter pruning has been widely used for compressing convolutional neural networks to reduce computation costs during the deployment stage.
We conduct extensive empirical experiments and reveal that although intra-domain performance can be maintained after filter pruning, cross-domain performance degrades substantially.
Experiments show that under the same pruning ratio, our method can achieve significantly better cross-domain generalization performance than the baseline filter pruning method.
- Score: 19.26591372002258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Filter pruning has been widely used for compressing convolutional neural
networks to reduce computation costs during the deployment stage. Recent
studies have shown that filter pruning techniques can achieve lossless
compression of deep neural networks, removing redundant filters (kernels)
without sacrificing accuracy. However, such evaluations assume that the
training and testing data come from similar environmental conditions
(independent and identically distributed); how filter pruning affects
cross-domain generalization (out-of-distribution) performance has been largely
ignored. We conduct extensive empirical experiments and reveal that although
intra-domain performance can be maintained after filter pruning, cross-domain
performance degrades substantially. Since scoring a filter's importance is one
of the central problems in pruning, we design the importance-scoring estimate
using the variance of domain-level risks to account for the pruning risk on
unseen distributions. As such, we can retain more domain-generalized filters.
Experiments show that, under the same pruning ratio, our method achieves
significantly better cross-domain generalization performance than the baseline
filter pruning method. As a first attempt, our work sheds light on the joint
problem of domain generalization and filter pruning.
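The idea can be pictured with a minimal sketch (not the authors' code): assume we have already measured, for every source domain and every filter, how much the loss rises when that filter is masked out; the score then combines the mean and the variance of those domain-level risks, so that filters whose removal is harmless consistently across domains are pruned first. The tensor name `risk` and the trade-off weight `lam` below are illustrative assumptions.

```python
import torch

def domain_aware_importance(risk: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """risk: (num_domains, num_filters) increase in loss per domain when a
    filter is masked out. Filters whose removal is consistently harmless
    across domains receive the lowest scores and are pruned first."""
    mean_risk = risk.mean(dim=0)                 # average damage across domains
    var_risk = risk.var(dim=0, unbiased=False)   # disagreement across domains
    return mean_risk + lam * var_risk            # higher score = more important

# Toy usage: 4 source domains, 16 filters, prune the 4 least important filters.
risk = torch.rand(4, 16)
scores = domain_aware_importance(risk)
prune_idx = torch.argsort(scores)[:4]
print("filters to prune:", sorted(prune_idx.tolist()))
```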
Related papers
- Efficient CNNs via Passive Filter Pruning [23.661189257759535]
Convolutional neural networks (CNNs) have shown state-of-the-art performance in various applications.
CNNs are resource-hungry due to their high computational complexity and memory requirements.
Recent efforts toward achieving computational efficiency in CNNs involve filter pruning methods.
arXiv Detail & Related papers (2023-04-05T09:19:19Z) - Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning methods introduce structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z) - Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
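As a rough, hedged illustration of a BN-based criterion (the paper's exact scoring may differ), one common convention is to rank each conv filter by the magnitude of the corresponding BatchNorm scale gamma; the helper below assumes that convention.

```python
import torch
import torch.nn as nn

def bn_filter_scores(bn: nn.BatchNorm2d) -> torch.Tensor:
    # Each BN channel corresponds to one filter of the preceding conv layer;
    # a small |gamma| means that channel's output is strongly suppressed.
    return bn.weight.detach().abs()

# Toy usage on an untrained conv + BN block (standing in for a pre-trained one).
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(8)
keep = torch.argsort(bn_filter_scores(bn), descending=True)[:6]
print("filters kept:", sorted(keep.tolist()))
```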
arXiv Detail & Related papers (2021-12-02T12:04:59Z) - Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate importance scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network named Dagger module to induce pruning.
In addition, to prune filters under a given FLOPs constraint, we leverage an explicit FLOPs-aware regularization that directly promotes pruning toward the target FLOPs.
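A FLOPs-aware penalty of this flavor can be pictured with a small, hedged sketch (not the Dagger module itself): give each filter a soft gate, estimate the layer's expected FLOPs from the gates, and penalize any excess over a target budget. `flops_per_filter` and `target_flops` below are illustrative placeholders.

```python
import torch

def flops_regularizer(gates: torch.Tensor, flops_per_filter: float,
                      target_flops: float) -> torch.Tensor:
    # gates: raw logits, one per filter; sigmoid gives a soft keep-probability.
    expected_flops = torch.sigmoid(gates).sum() * flops_per_filter
    # Penalize only the amount by which the expected cost exceeds the budget.
    return torch.relu(expected_flops - target_flops)

gates = torch.zeros(64, requires_grad=True)   # 64 filters, gates start half-open
penalty = flops_regularizer(gates, flops_per_filter=1e6, target_flops=2e7)
penalty.backward()                            # gradients push surplus gates shut
print(float(penalty))
```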
arXiv Detail & Related papers (2020-10-28T15:26:40Z) - SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% parameters and 60.2% FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z) - OrthoReg: Robust Network Pruning Using Orthonormality Regularization [7.754712828900727]
We propose a principled regularization strategy that enforces orthonormality on a network's filters to reduce inter-filter correlation.
When used for iterative pruning on VGG-13, MobileNet-V1, and ResNet-34, OrthoReg consistently outperforms five baseline techniques.
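A minimal sketch of one standard way to encode such an orthonormality constraint (OrthoReg's exact loss may differ): flatten each filter into a row vector and push the Gram matrix of the filters toward the identity.

```python
import torch
import torch.nn as nn

def orthonormality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    w = conv.weight.view(conv.out_channels, -1)      # one row per filter
    gram = w @ w.t()                                 # inter-filter correlations
    eye = torch.eye(conv.out_channels, device=w.device)
    return ((gram - eye) ** 2).sum()                 # squared Frobenius norm

conv = nn.Conv2d(16, 32, kernel_size=3)
loss = orthonormality_penalty(conv)  # add this, suitably scaled, to the training loss
loss.backward()
print(float(loss))
```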
arXiv Detail & Related papers (2020-09-10T17:21:21Z) - To Filter Prune, or to Layer Prune, That Is The Question [13.450136532402226]
We show the limitations of filter pruning methods in terms of latency reduction.
We present a set of layer pruning methods, based on different criteria, that achieve higher latency reduction than filter pruning methods at similar accuracy.
LayerPrune also outperforms handcrafted architectures such as Shufflenet, MobileNet, MNASNet and ResNet18 by 7.3%, 4.6%, 2.8% and 0.5% respectively, at a similar latency budget on the ImageNet dataset.
arXiv Detail & Related papers (2020-07-11T02:51:40Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
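For reference, the weight-norm criterion cited above as prior work can be sketched in a few lines; this is the baseline criterion, not the paper's dynamic sparsity-control mechanism.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Sum of absolute weights per output filter (smaller = less important).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

conv = nn.Conv2d(16, 32, kernel_size=3)
scores = l1_filter_scores(conv)
prune_idx = torch.argsort(scores)[: int(0.3 * conv.out_channels)]  # prune 30%
print("filters to prune:", sorted(prune_idx.tolist()))
```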
arXiv Detail & Related papers (2020-05-06T07:41:22Z) - Filter Grafting for Deep Neural Networks: Reason, Method, and
Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting, which aims to improve the representation capability of CNNs by reactivating unimportant (invalid) filters.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z) - Convolution-Weight-Distribution Assumption: Rethinking the Criteria of
Channel Pruning [90.2947802490534]
We find two blind spots in the study of pruning criteria.
First, the ranks of filters' importance scores produced by different criteria are almost identical, resulting in similar pruned structures.
Second, the filters' importance scores measured by some pruning criteria are too close to one another to distinguish network redundancy well.
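The first blind spot can be pictured informally with a hedged sketch: compare the rankings produced by two example criteria (L1 and L2 weight norms, chosen here only for illustration) via Spearman correlation on a randomly initialized layer.

```python
import torch
import torch.nn as nn

def rank(x: torch.Tensor) -> torch.Tensor:
    # Rank of each element (0 = smallest value).
    return torch.argsort(torch.argsort(x)).float()

conv = nn.Conv2d(64, 128, kernel_size=3)
w = conv.weight.detach()
l1 = w.abs().sum(dim=(1, 2, 3))
l2 = w.pow(2).sum(dim=(1, 2, 3)).sqrt()

# Spearman correlation = Pearson correlation of the ranks.
spearman = torch.corrcoef(torch.stack([rank(l1), rank(l2)]))[0, 1]
print(f"Spearman rank correlation between L1 and L2 scores: {spearman.item():.3f}")
```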
arXiv Detail & Related papers (2020-04-24T09:54:21Z)