Filter Pruning via Filters Similarity in Consecutive Layers
- URL: http://arxiv.org/abs/2304.13397v1
- Date: Wed, 26 Apr 2023 09:18:38 GMT
- Title: Filter Pruning via Filters Similarity in Consecutive Layers
- Authors: Xiaorui Wang, Jun Wang, Xin Tang, Peng Gao, Rui Fang, Guotong Xie
- Abstract summary: Filter pruning is widely adopted to compress and accelerate Convolutional Neural Networks (CNNs).
We propose a novel, intuitive pruning method that explicitly leverages the Filters Similarity in Consecutive Layers (FSCL).
Experiments demonstrate the effectiveness of FSCL, which yields remarkable improvements over the state of the art in accuracy, FLOPs, and parameter reduction.
- Score: 20.29555787754269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Filter pruning is widely adopted to compress and accelerate the Convolutional
Neural Networks (CNNs), but most previous works ignore the relationship between
filters and channels in different layers. Processing each layer independently
fails to utilize the collaborative relationship across layers. In this paper,
we propose a novel, intuitive pruning method that explicitly leverages the
Filters Similarity in Consecutive Layers (FSCL). FSCL compresses models by
pruning filters whose corresponding features contribute less to the model.
Extensive experiments demonstrate the effectiveness of FSCL, which yields
remarkable improvements over the state of the art in accuracy, FLOPs, and
parameter reduction on several benchmark models and datasets.
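The abstract does not spell out FSCL's exact similarity criterion, so the following is only a minimal sketch of the cross-layer idea under stated assumptions: score each filter by combining its own magnitude with its cosine similarity to filters in the next layer, and keep the top-scoring fraction. The function names and the particular scoring formula are illustrative, not the paper's actual method.

```python
import numpy as np

def fscl_importance(filters_l, filters_next, alpha=0.5):
    """Hypothetical importance score inspired by FSCL: combine a filter's
    own magnitude with its maximum cosine similarity to filters in the
    next layer. The paper's real criterion may differ.

    filters_l:    (n_l, d)    flattened filters of layer l
    filters_next: (n_next, d) flattened filters of layer l+1,
                              assumed projected to the same dimension
    """
    # Own-magnitude term: L1 norm of each filter, normalized to [0, 1].
    mag = np.abs(filters_l).sum(axis=1)
    # Cross-layer term: best cosine similarity to any next-layer filter.
    a = filters_l / (np.linalg.norm(filters_l, axis=1, keepdims=True) + 1e-8)
    b = filters_next / (np.linalg.norm(filters_next, axis=1, keepdims=True) + 1e-8)
    sim = (a @ b.T).max(axis=1)
    return alpha * mag / (mag.max() + 1e-8) + (1 - alpha) * sim

def prune_mask(scores, keep_ratio=0.5):
    """Boolean mask keeping the top keep_ratio fraction of filters by score."""
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

In practice the kept mask would be applied jointly to layer l's output channels and layer l+1's input channels, which is what distinguishes cross-layer methods from pruning each layer in isolation.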
Related papers
- Structured Network Pruning by Measuring Filter-wise Interactions [6.037167142826297]
We propose SNPFI (Structured Network Pruning by measuring Filter-wise Interaction), a structured network pruning approach.
During the pruning, the SNPFI can automatically assign the proper sparsity based on the filter utilization strength.
We empirically demonstrate the effectiveness of the SNPFI with several commonly used CNN models.
arXiv Detail & Related papers (2023-07-03T05:26:05Z)
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- Focus Your Attention (with Adaptive IIR Filters) [62.80628327613344]
We present a new layer in which dynamic (i.e., input-dependent) Infinite Impulse Response (IIR) filters of order two are used to process the input sequence.
Despite their relatively low order, the causal adaptive filters are shown to focus attention on the relevant sequence elements.
arXiv Detail & Related papers (2023-05-24T09:42:30Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning method introduces structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
- Reconstructing Pruned Filters using Cheap Spatial Transformations [22.698845243751293]
We present an efficient alternative to the convolutional layer using cheap spatial transformations.
This construction exploits an inherent spatial redundancy of the learned convolutional filters.
We show that these networks can achieve comparable or improved performance to state-of-the-art pruning models.
arXiv Detail & Related papers (2021-10-25T12:13:45Z)
- Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct theoretical analysis on network complexity and an efficient convolution scheme is introduced.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z)
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
- MINT: Deep Network Compression via Mutual Information-based Neuron Trimming [32.449324736645586]
Mutual Information-based Neuron Trimming (MINT) approaches deep compression via pruning.
MINT enforces sparsity based on the strength of the relationship between filters of adjacent layers.
When pruning a network, we ensure that retained filters contribute the majority of the information towards succeeding layers.
arXiv Detail & Related papers (2020-03-18T21:05:02Z)
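Several of the listed papers (e.g., Dependency Aware Filter Pruning) contrast their methods with the common baseline of pruning filters by weight norm or by the corresponding batch-norm scaling factor. A minimal sketch of that baseline follows; the function name and thresholding scheme are illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

def filters_to_prune_by_bn(gamma, prune_ratio=0.3):
    """Rank filters by the magnitude of their batch-norm scaling factor
    (gamma) and mark the smallest fraction for removal, as in common
    norm-based structured pruning baselines.

    gamma: (n_filters,) per-channel BN scale of a convolutional layer.
    Returns the indices of the filters to prune.
    """
    n_prune = int(len(gamma) * prune_ratio)
    order = np.argsort(np.abs(gamma))  # ascending: smallest scales first
    return order[:n_prune]
```

The limitation the cross-layer methods above address is visible here: this score looks at one layer in isolation and ignores how a filter's output is consumed by the next layer.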
This list is automatically generated from the titles and abstracts of the papers in this site.