Structured Network Pruning by Measuring Filter-wise Interactions
- URL: http://arxiv.org/abs/2307.00758v1
- Date: Mon, 3 Jul 2023 05:26:05 GMT
- Title: Structured Network Pruning by Measuring Filter-wise Interactions
- Authors: Wenting Tang, Xingxing Wei, Bo Li (Beijing Key Laboratory of Digital
Media, School of Computer Science and Engineering, Beihang University,
Beijing, China)
- Abstract summary: We propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction).
During pruning, SNPFI automatically assigns the proper sparsity based on the filter utilization strength.
We empirically demonstrate the effectiveness of the SNPFI with several commonly used CNN models.
- Score: 6.037167142826297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured network pruning is a practical approach to reduce computation cost
directly while retaining the CNNs' generalization performance in real
applications. However, identifying redundant filters is a core problem in
structured network pruning, and current redundancy criteria focus only on
individual filters' attributes. As pruning sparsity increases, these criteria
become neither effective nor efficient. Since the
filter-wise interaction also contributes to the CNN's prediction accuracy, we
integrate the filter-wise interaction into the redundancy criterion. In our
criterion, we introduce the filter importance and filter utilization strength
to reflect the decision ability of individual and multiple filters. Utilizing
this new redundancy criterion, we propose a structured network pruning approach
SNPFI (Structured Network Pruning by measuring Filter-wise Interaction). During
the pruning, the SNPFI can automatically assign the proper sparsity based on
the filter utilization strength and eliminate the useless filters by filter
importance. After the pruning, the SNPFI can recover pruned model's performance
effectively without iterative training by minimizing the interaction
difference. We empirically demonstrate the effectiveness of the SNPFI with
several commonly used CNN models, including AlexNet, MobileNetv1, and
ResNet-50, on various image classification datasets, including MNIST, CIFAR-10,
and ImageNet. For all experimental CNN models, nearly 60% of the computation is
removed during compression while the classification accuracy is preserved.
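The abstract describes the criterion only at a high level. As a rough illustration of the idea, the sketch below scores each filter by combining an individual importance term with a pairwise-interaction term and prunes the lowest-scoring filters. Both measures here (L1 norm for importance, a cosine-redundancy score for utilization strength) are stand-ins chosen for the sketch, not the interaction measures used in the paper.

```python
import numpy as np

def filter_importance(W):
    """Per-filter importance as the L1 norm of each filter's weights.
    (A stand-in score; the paper derives importance from
    filter-wise interactions instead.)"""
    return np.abs(W.reshape(W.shape[0], -1)).sum(axis=1)

def utilization_strength(W):
    """Toy 'utilization' score: how much each filter adds on top of the
    others, measured as 1 - max cosine similarity to any other filter.
    Near-duplicate (redundant) filters score close to 0."""
    F = W.reshape(W.shape[0], -1)
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    sim = F @ F.T
    np.fill_diagonal(sim, -np.inf)
    return 1.0 - sim.max(axis=1)

def prune_layer(W, sparsity):
    """Keep the (1 - sparsity) fraction of filters with the highest
    combined score; return the indices of the kept filters."""
    score = filter_importance(W) * utilization_strength(W)
    n_keep = max(1, int(round(W.shape[0] * (1.0 - sparsity))))
    return np.sort(np.argsort(score)[-n_keep:])

# Example: a conv layer with 64 filters of shape 3x3x3, pruned at 60% sparsity.
W = np.random.randn(64, 3, 3, 3)
kept = prune_layer(W, sparsity=0.6)
print(f"kept {len(kept)}/64 filters")
```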
Related papers
- Efficient CNNs via Passive Filter Pruning [23.661189257759535]
Convolutional neural networks (CNNs) have shown state-of-the-art performance in various applications.
CNNs are resource-hungry, however, due to their high computational complexity and memory requirements.
Recent efforts toward achieving computational efficiency in CNNs involve filter pruning methods.
arXiv Detail & Related papers (2023-04-05T09:19:19Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning introduces structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning; a generic soft-pruning sketch follows this entry.
Our method achieves competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
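The entry above is summarized only briefly. The sketch below shows generic soft filter pruning, which we assume underlies the method: low-norm filters are zeroed between training steps but stay trainable so they can recover, with the sparsity ramped gradually. The cubic schedule and the L1 criterion are hypothetical stand-ins, not details taken from the paper.

```python
import numpy as np

def soft_prune_step(W, target_sparsity, epoch, total_epochs):
    """One soft-pruning step: zero the currently least-important
    filters in place, but leave them trainable so they can recover in
    later epochs. Sparsity ramps asymptotically toward the target."""
    # Hypothetical asymptotic schedule; the paper's exact schedule may differ.
    current = target_sparsity * (1.0 - (1.0 - epoch / total_epochs) ** 3)
    n_prune = int(W.shape[0] * current)
    if n_prune == 0:
        return W
    norms = np.abs(W.reshape(W.shape[0], -1)).sum(axis=1)
    W[np.argsort(norms)[:n_prune]] = 0.0  # masked, not removed
    return W
```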
- A Passive Similarity based CNN Filter Pruning for Efficient Acoustic Scene Classification [23.661189257759535]
We present a method to develop low-complexity convolutional neural networks (CNNs) for acoustic scene classification (ASC).
We propose a passive filter pruning framework, where a few convolutional filters from the CNNs are eliminated to yield compressed CNNs; a similarity-based sketch follows this entry.
The proposed method is simple and reduces computations per inference by 27%, with 25% fewer parameters and less than a 1% drop in accuracy.
arXiv Detail & Related papers (2022-03-29T17:00:06Z)
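The passive (data-free) framework above prunes filters without any forward passes. A minimal sketch of one plausible instantiation: drop the weaker filter of any pair whose cosine similarity exceeds a threshold. The threshold value and the drop rule are assumptions made for illustration, not the paper's exact rule.

```python
import numpy as np

def passive_similarity_prune(W, threshold=0.95):
    """Data-free ('passive') pruning sketch: for every pair of filters
    whose cosine similarity exceeds `threshold`, drop the one with the
    smaller L2 norm. Returns the indices of the filters to keep."""
    F = W.reshape(W.shape[0], -1)
    norms = np.linalg.norm(F, axis=1)
    U = F / (norms[:, None] + 1e-12)
    sim = U @ U.T
    keep = np.ones(W.shape[0], dtype=bool)
    for i in range(W.shape[0]):
        if not keep[i]:
            continue
        for j in range(i + 1, W.shape[0]):
            if keep[j] and sim[i, j] > threshold:
                # Drop the weaker of the near-duplicate pair.
                keep[j if norms[j] <= norms[i] else i] = False
                if not keep[i]:
                    break
    return np.where(keep)[0]
```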
- Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs [69.3939291118954]
Unstructured pruning is well suited to reduce the memory footprint of convolutional neural networks (CNNs).
Standard unstructured pruning (SP) reduces the memory footprint of CNNs by setting filter elements to zero; a sketch of SP follows this entry.
We introduce interspace pruning (IP), a general tool to improve existing pruning methods.
arXiv Detail & Related papers (2022-03-15T11:50:45Z)
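As context for the entry above, standard unstructured pruning (SP) can be sketched in a few lines: zero out the smallest-magnitude weights within a layer. Interspace pruning applies such masks to filter coefficients expressed in an adaptive basis; the sketch below shows only the SP baseline.

```python
import numpy as np

def standard_unstructured_prune(W, sparsity):
    """Standard unstructured pruning (SP) as the entry describes it:
    zero the smallest-magnitude individual weights. Returns the masked
    weights and the boolean mask. (Ties at the threshold may zero a
    few extra weights.)"""
    flat = np.abs(W).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return W, np.ones_like(W, dtype=bool)
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(W) > thresh
    return W * mask, mask
```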
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method that evaluates the importance of each filter based on the BN parameters of pre-trained CNNs; see the sketch after this entry.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method achieves outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
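The BN-based criterion above has a natural reading: a filter whose following BatchNorm scaling factor (gamma) is near zero contributes little downstream. A minimal PyTorch sketch of that reading; the paper's exact score may differ, for example by also using the shift parameter.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def bn_filter_importance(model):
    """Rank each conv layer's filters by the absolute value of the
    gamma (scale) parameter of the BatchNorm layer that follows it,
    a common proxy for a filter's downstream contribution."""
    scores = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            scores[name] = m.weight.abs().cpu()  # m.weight is gamma
    return scores

# Example with a tiny stand-in network:
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
for name, s in bn_filter_importance(net).items():
    print(name, "least important filter:", int(s.argmin()))
```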
- Manipulating Identical Filter Redundancy for Efficient Pruning on Deep and Complicated CNN [126.88224745942456]
We propose a novel Centripetal SGD (C-SGD) that makes some filters identical, resulting in ideal redundancy patterns; a simplified update rule is sketched after this entry.
Compared with existing methods, C-SGD delivers better performance because the redundancy is better organized.
arXiv Detail & Related papers (2021-07-30T06:18:19Z)
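C-SGD trains selected filters toward a common value so that duplicates can later be removed without changing the layer's function. Below is a simplified numpy rendering of a centripetal update: filters in a cluster share an averaged gradient and are pulled toward their cluster mean. The learning rate, centripetal strength, and exact form are illustrative, not the paper's precise rule.

```python
import numpy as np

def csgd_update(W, G, clusters, lr=0.1, centripetal=1e-3):
    """Simplified Centripetal SGD step. W, G: (num_filters, ...)
    weights and gradients; clusters: list of index arrays. Each
    cluster receives its averaged gradient plus a pull toward the
    cluster mean, so its filters gradually become identical."""
    W = W.copy()
    for idx in clusters:
        g_mean = G[idx].mean(axis=0)          # shared gradient
        w_mean = W[idx].mean(axis=0)          # cluster centroid
        W[idx] -= lr * g_mean + centripetal * (W[idx] - w_mean)
    return W
```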
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking; a sketch of classical unsharp masking follows this entry.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
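For reference, the classical unsharp-masking operation that inspires the paper's formulation: isolate high-frequency structure by subtracting a low-pass version of the image, then add it back scaled by a single coefficient. The paper's guided variant, which estimates that coefficient and transfers structure from a guidance image, is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Classical unsharp masking: subtract a low-pass (blurred)
    version to isolate structure, then add the detail back scaled by
    a single coefficient (`amount`)."""
    low_pass = gaussian_filter(image, sigma=sigma)
    detail = image - low_pass          # high-frequency structure
    return image + amount * detail     # amount plays the coefficient role

img = np.random.rand(64, 64)
sharpened = unsharp_mask(img, sigma=2.0, amount=0.8)
```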
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity; a toy controller is sketched after this entry.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
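The dynamic control of the regularization strength can be pictured as a feedback loop. The toy rule below strengthens a sparsity penalty while the network is less sparse than desired and relaxes it otherwise; the function and its parameters are hypothetical, and the paper's mechanism additionally models inter-layer dependencies, which this sketch ignores.

```python
def update_reg_strength(lam, current_sparsity, target_sparsity,
                        step=1e-4, lam_max=1e-1):
    """Toy feedback rule for a dynamically controlled sparsity-inducing
    penalty: raise lambda while the network is below the target
    sparsity, lower it once the target is met."""
    if current_sparsity < target_sparsity:
        lam = min(lam + step, lam_max)
    else:
        lam = max(lam - step, 0.0)
    return lam
```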
- Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components of modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting (Method) to achieve this goal.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks; an entropy-based sketch follows this entry.
arXiv Detail & Related papers (2020-04-26T08:36:26Z)
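Filter grafting revives uninformative filters by importing weights from peer networks. The sketch below uses histogram entropy of a filter's weights as the information criterion and a fixed blend weight `alpha` in place of the paper's adaptive weighting strategy; both choices are assumptions made for illustration.

```python
import numpy as np

def filter_entropy(w, bins=10):
    """Information criterion sketch: entropy of the histogram of a
    filter's weights. Near-constant (low-entropy) filters carry
    little information and are candidates for grafting."""
    hist, _ = np.histogram(w, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def graft(W_self, W_donor, threshold=0.5, alpha=0.8):
    """Replace low-entropy filters with a blend of the donor network's
    filters. `alpha` stands in for the adaptive weighting strategy."""
    W = W_self.copy()
    for i in range(W.shape[0]):
        if filter_entropy(W[i]) < threshold:
            W[i] = alpha * W_donor[i] + (1 - alpha) * W[i]
    return W
```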
- Pruning CNN's with linear filter ensembles [0.0]
We use pruning to reduce the network size and, implicitly, the number of floating point operations (FLOPs).
We develop a novel filter importance norm based on the change in the empirical loss caused by the presence or removal of a component from the network architecture; a brute-force sketch follows this entry.
We evaluate our method on a fully connected network, as well as on the ResNet architecture trained on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-01-22T16:52:06Z)
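The loss-change importance norm in the last entry can be approximated by brute force: zero one filter at a time and record how much the empirical loss rises. The sketch below does exactly that, with `loss_fn` and `data` supplied by the caller; the paper's linear-ensemble construction is not reproduced.

```python
import numpy as np

def loss_change_importance(loss_fn, W, data):
    """Score each filter by the increase in empirical loss when that
    filter is zeroed out. `loss_fn(W, data)` is a user-supplied
    function returning the scalar loss for weights W."""
    base = loss_fn(W, data)
    scores = np.empty(W.shape[0])
    for i in range(W.shape[0]):
        W_masked = W.copy()
        W_masked[i] = 0.0
        scores[i] = loss_fn(W_masked, data) - base
    return scores
```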