Entropy Induced Pruning Framework for Convolutional Neural Networks
- URL: http://arxiv.org/abs/2208.06660v1
- Date: Sat, 13 Aug 2022 14:35:08 GMT
- Title: Entropy Induced Pruning Framework for Convolutional Neural Networks
- Authors: Yiheng Lu, Ziyu Guan, Yaming Yang, Maoguo Gong, Wei Zhao, Kaiyuan Feng
- Abstract summary: We propose a metric named Average Filter Information Entropy (AFIE) to measure the importance of each filter.
The proposed framework yields a stable importance evaluation for each filter regardless of whether the original model is fully trained.
- Score: 30.89967076857665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured pruning techniques have achieved strong compression performance on convolutional neural networks for image classification tasks. However, most existing methods are weight-oriented, and their pruning results may be unsatisfactory when the original model is poorly trained; that is, a fully-trained model is required to provide useful weight information. This can be time-consuming, and the pruning results are sensitive to the update process of the model parameters. In this paper, we propose a metric named Average Filter Information Entropy (AFIE) to measure the importance of each filter. It is calculated in three major steps: low-rank decomposition of the "input-output" matrix of each convolutional layer, normalization of the obtained eigenvalues, and calculation of filter importance based on information entropy. By leveraging AFIE, the proposed framework yields a stable importance evaluation for each filter regardless of whether the original model is fully trained. We implement our AFIE-based framework on AlexNet, VGG-16, and ResNet-50, and test them on MNIST, CIFAR-10, and ImageNet, respectively. The experimental results are encouraging: even when the original model is trained for only one epoch, the importance evaluation of each filter remains identical to the result obtained when the model is fully trained. This indicates that the proposed pruning strategy can perform effectively at the beginning of the training process of the original model.
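The three steps described in the abstract can be sketched as follows. This is a minimal illustration assuming SVD as the low-rank decomposition and sum-normalization of the singular values into a probability distribution; the paper's exact formulation may differ.

```python
import numpy as np

def afie(weight: np.ndarray) -> float:
    """Sketch of Average Filter Information Entropy for one conv layer.

    weight: convolution kernel of shape (out_channels, in_channels, k, k).
    Steps follow the abstract: low-rank decomposition of the
    "input-output" matrix, eigenvalue normalization, then entropy.
    """
    out_ch = weight.shape[0]
    # Flatten the kernel to an "input-output" matrix: one row per filter.
    mat = weight.reshape(out_ch, -1)
    # Step 1: low-rank decomposition (singular values of the matrix).
    sigma = np.linalg.svd(mat, compute_uv=False)
    # Step 2: normalize the eigenvalues into a probability distribution.
    p = sigma / sigma.sum()
    # Step 3: information entropy of the distribution.
    entropy = -np.sum(p * np.log(p + 1e-12))
    # Average the layer entropy over its filters.
    return float(entropy / out_ch)
```

Because the score depends only on the spectrum of the weight matrix (not on individual weight magnitudes), such a measure can plausibly remain stable across training epochs, which matches the observation reported in the abstract.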
Related papers
- Learning effective pruning at initialization from iterative pruning [15.842658282636876]
We present an end-to-end neural network-based PaI method to reduce training costs.
Our approach outperforms existing methods in high-sparsity settings.
As the first neural network-based PaI method, we conduct extensive experiments to validate the factors influencing this approach.
arXiv Detail & Related papers (2024-08-27T03:17:52Z) - Unsupervised textile defect detection using convolutional neural networks [0.0]
We propose a novel motif-based approach for unsupervised textile anomaly detection.
It combines the benefits of traditional convolutional neural networks with those of an unsupervised learning paradigm.
We demonstrate the effectiveness of our approach on the Patterned Fabrics benchmark dataset.
arXiv Detail & Related papers (2023-11-30T22:08:06Z) - Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt [103.58323875748427]
This work offers a novel unsupervised pre-training solution for low-data regimes.
Inspired by the recent success of the Prompting technique, we introduce a new pre-training method that boosts QEIS models.
Experimental results show that our method significantly boosts several QEIS models on three datasets.
arXiv Detail & Related papers (2023-02-02T15:49:03Z) - Impact of PolSAR pre-processing and balancing methods on complex-valued neural networks segmentation tasks [9.6556424340252]
We investigate the semantic segmentation of Polarimetric Synthetic Aperture Radar (PolSAR) imagery using Complex-Valued Neural Networks (CVNNs).
We exhaustively compare both methods across six model architectures: three complex-valued models and their respective real-equivalent models.
We propose two methods for reducing this gap and present results for all input representations, models, and dataset pre-processing methods.
arXiv Detail & Related papers (2022-10-28T12:49:43Z) - SBPF: Sensitiveness Based Pruning Framework For Convolutional Neural Network On Image Classification [23.271501988351268]
Pruning techniques are used to compress convolutional neural networks (CNNs) on image classification.
We propose a sensitiveness based method to evaluate the importance of each layer from the perspective of inference accuracy.
arXiv Detail & Related papers (2022-08-09T08:05:19Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
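As a point of comparison with AFIE, a BN-parameter importance score of the kind this entry describes can be sketched as follows. This is a hypothetical illustration that ranks filters by the magnitude of the BN scale parameter gamma; the cited paper's exact criterion may differ.

```python
import numpy as np

def bn_filter_ranking(gamma: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Hypothetical sketch: score each filter by the absolute value of
    its batch-norm scale parameter gamma and return the indices of the
    least important filters, i.e. the candidates for pruning."""
    # Ascending sort: filters with the smallest |gamma| come first.
    order = np.argsort(np.abs(gamma))
    n_prune = int(len(gamma) * prune_ratio)
    return order[:n_prune]
```

Unlike AFIE, such a criterion reads importance off trained parameters, so it presumably depends on how well the original model has converged.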
arXiv Detail & Related papers (2021-12-02T12:04:59Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.