Filter Pruning using Hierarchical Group Sparse Regularization for Deep
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2011.02389v1
- Date: Wed, 4 Nov 2020 16:29:41 GMT
- Title: Filter Pruning using Hierarchical Group Sparse Regularization for Deep
Convolutional Neural Networks
- Authors: Kakeru Mitsuno and Takio Kurita
- Abstract summary: We propose a filter pruning method using hierarchical group sparse regularization.
It can reduce more than 50% of the parameters of ResNet for CIFAR-10 with only a 0.3% decrease in test accuracy.
Also, 34% of the parameters of ResNet are reduced for TinyImageNet-200 while achieving higher accuracy than the baseline network.
- Score: 3.5636461829966093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since convolutional neural networks are often trained with redundant
parameters, it is possible to remove redundant kernels or filters and obtain a
compact network without dropping the classification accuracy. In this paper, we
propose a filter pruning method using hierarchical group sparse regularization.
Our previous work showed that hierarchical group sparse regularization is
effective in obtaining sparse networks in which the filters connected to
unnecessary channels are automatically driven close to zero. After training the
convolutional neural network with the hierarchical group sparse regularization,
unnecessary filters are selected based on the increase in the classification
loss on randomly selected training samples, yielding a compact network. The
proposed method can reduce more than 50% of the parameters of ResNet for
CIFAR-10 with only a 0.3% decrease in test accuracy. Likewise, 34% of the
parameters of ResNet are reduced for TinyImageNet-200 while achieving higher
accuracy than the baseline network.
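The following sketch (in PyTorch, not the authors' code) is a rough illustration of the two steps described in the abstract: a two-level group-sparse penalty over a convolution layer, and a filter score given by the increase in classification loss on a randomly sampled training batch when a filter is zeroed out. The paper's exact hierarchical grouping, weighting, and selection procedure may differ, and all function and variable names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def hierarchical_group_sparse_penalty(conv: nn.Conv2d,
                                      lam_filter: float = 1e-4,
                                      lam_channel: float = 1e-4) -> torch.Tensor:
    """Two-level group-lasso-style penalty on a conv layer (illustrative).

    Level 1 groups: whole output filters W[o, :, :, :].
    Level 2 groups: input-channel slices W[o, i, :, :] inside each filter.
    """
    w = conv.weight                                # shape (out, in, kH, kW)
    filter_norms = w.flatten(1).norm(p=2, dim=1)   # one norm per output filter
    channel_norms = w.flatten(2).norm(p=2, dim=2)  # one norm per (out, in) slice
    return lam_filter * filter_norms.sum() + lam_channel * channel_norms.sum()

@torch.no_grad()
def filter_importance(model: nn.Module, conv: nn.Conv2d,
                      x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Score each output filter of `conv` by the increase in classification
    loss on a randomly sampled batch (x, y) when that filter is zeroed out;
    filters whose removal barely increases the loss are pruning candidates."""
    base_loss = F.cross_entropy(model(x), y)
    scores = torch.zeros(conv.out_channels)
    for o in range(conv.out_channels):
        saved = conv.weight[o].clone()
        conv.weight[o].zero_()                     # temporarily remove filter o
        scores[o] = F.cross_entropy(model(x), y) - base_loss
        conv.weight[o].copy_(saved)                # restore the filter
    return scores
```

During training, such a penalty would be added to the task loss for every Conv2d module; after training, the filters with the lowest scores would be removed and the compact network fine-tuned.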
Related papers
- Structured Network Pruning by Measuring Filter-wise Interactions [6.037167142826297]
We propose a structured network pruning approach, SNPFI (Structured Network Pruning by measuring Filter-wise Interaction).
During pruning, SNPFI automatically assigns the proper sparsity based on the filter utilization strength.
We empirically demonstrate the effectiveness of SNPFI with several commonly used CNN models.
arXiv Detail & Related papers (2023-07-03T05:26:05Z) - Filter Pruning for Efficient CNNs via Knowledge-driven Differential
Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning methods introduce structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z) - Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method that evaluates the importance of each filter based on the BN parameters of pre-trained CNNs (a sketch of one common instantiation of this idea appears after this list).
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% of the parameters and 60.2% of the FLOPs of ResNet-101 with only a 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z) - OrthoReg: Robust Network Pruning Using Orthonormality Regularization [7.754712828900727]
We propose a principled regularization strategy that enforces orthonormality on a network's filters to reduce inter-filter correlation.
When used for iterative pruning on VGG-13, MobileNet-V1, and ResNet-34, OrthoReg consistently outperforms five baseline techniques.
arXiv Detail & Related papers (2020-09-10T17:21:21Z) - Cross-filter compression for CNN inference acceleration [4.324080238456531]
We propose a new cross-filter compression method that can provide $\sim 32\times$ memory savings and a $122\times$ speed-up in convolution operations.
Our method, based on Binary-Weight and XNOR-Net separately, is evaluated on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-05-18T19:06:14Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Filter Sketch for Network Pruning [184.41079868885265]
We propose a novel network pruning approach that preserves the information of pre-trained network weights (filters).
Our approach, referred to as FilterSketch, encodes the second-order information of pre-trained weights.
Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of FLOPs and prunes 59.9% of network parameters with negligible accuracy cost.
arXiv Detail & Related papers (2020-01-23T13:57:08Z)
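As referenced in the "Batch Normalization Tells You Which Filter is Important" entry above, one common way to turn BN parameters into a filter-importance criterion is to rank channels by the magnitude of the BN scale factor gamma. The sketch below (PyTorch, illustrative only) shows that instantiation; it is not necessarily the exact criterion used in the cited paper, and the function name and `prune_ratio` parameter are assumptions.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def bn_filter_ranking(bn: nn.BatchNorm2d, prune_ratio: float = 0.5):
    """Rank the filters feeding a BatchNorm2d layer by |gamma|: a small scale
    factor suggests the corresponding channel contributes little to the layer's
    output. Returns indices to prune and indices to keep (illustrative)."""
    importance = bn.weight.abs()                   # |gamma|, one value per channel
    order = importance.argsort()                   # least important first
    n_prune = int(prune_ratio * importance.numel())
    return order[:n_prune], order[n_prune:]        # (prune_idx, keep_idx)
```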
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.