A pruning method based on the dissimilarity of angle among channels and filters
- URL: http://arxiv.org/abs/2210.16504v1
- Date: Sat, 29 Oct 2022 05:47:57 GMT
- Title: A pruning method based on the dissimilarity of angle among channels and filters
- Authors: Jiayi Yao, Ping Li, Xiatao Kang, Yuzhe Wang
- Abstract summary: We encode the convolutional network to obtain the similarity of different encoding nodes.
We evaluate the connectivity-power among convolutional kernels on the basis of this similarity.
We propose Channel Pruning based on the Dissimilarity of Angle (DACP).
- Score: 13.878426750493784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are used ever more widely in various fields, and their computation and memory demands are growing significantly. To make them applicable under constrained conditions such as embedded deployment, network compression has emerged, and among compression techniques researchers pay particular attention to network pruning. In this paper, we encode the convolutional network to obtain the similarity of different encoding nodes, and evaluate the connectivity-power among convolutional kernels on the basis of that similarity. We then impose different levels of penalty according to the connectivity-power. Meanwhile, we propose Channel Pruning based on the Dissimilarity of Angle (DACP). First, we train a sparse model with a Group Lasso (GL) penalty, and impose an angle-dissimilarity constraint on the channels and filters of the convolutional network to obtain an even sparser structure. Finally, the effectiveness of our method is demonstrated in the experiment section. On CIFAR-10, we reduce FLOPs by 66.86% on VGG-16 with 93.31% accuracy after pruning, where FLOPs denotes the number of floating-point operations required by the model. Moreover, on ResNet-32 we reduce FLOPs by 58.46%, with accuracy after pruning reaching 91.76%.
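The abstract describes the method only at a high level. As a hedged sketch of the two ingredients it names, the snippet below computes a cosine-based angle-dissimilarity matrix for the filters of one convolutional layer and a Group Lasso penalty over those filters; the function names and the PyTorch formulation are illustrative assumptions, not the authors' code.

```python
import torch

def angle_dissimilarity(conv_weight: torch.Tensor) -> torch.Tensor:
    """Pairwise angle dissimilarity between the filters of one conv layer.

    conv_weight: (out_channels, in_channels, kH, kW). Each filter is flattened
    to a vector; dissimilarity is 1 - |cos(angle)|, so nearly parallel
    (i.e. redundant) filters get a score close to 0.
    """
    flat = conv_weight.flatten(start_dim=1)                        # (C_out, C_in*kH*kW)
    flat = flat / flat.norm(dim=1, keepdim=True).clamp_min(1e-12)
    cosine = flat @ flat.t()                                       # cos of pairwise angles
    return 1.0 - cosine.abs()

def group_lasso_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
    """Group Lasso (GL) term: the sum of per-filter L2 norms, which pushes
    whole filters toward zero and yields the sparse structure the abstract
    refers to."""
    return conv_weight.flatten(start_dim=1).norm(dim=1).sum()
```

Filters or channels whose mutual dissimilarity is low are natural pruning candidates; how DACP combines this matrix with the GL-sparsified model is detailed in the paper itself.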
Related papers
- Filter Pruning For CNN With Enhanced Linear Representation Redundancy [3.853146967741941]
We present a data-driven loss function term calculated from the correlation matrix of different feature maps in the same layer, named CCM-loss.
CCM-loss provides another universal mathematical tool besides L*-norm regularization.
In our new strategy, we mainly focus on the consistency and integrity of the information flow in the network.
arXiv Detail & Related papers (2023-10-10T06:27:30Z)
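The CCM-loss entry above only names the correlation matrix of same-layer feature maps. A minimal sketch of what such a term could look like (my own reading, with a hypothetical function name, not the paper's released code) penalizes the off-diagonal correlations so that channels carry less linearly redundant information:

```python
import torch

def ccm_style_loss(feature_maps: torch.Tensor) -> torch.Tensor:
    """feature_maps: one layer's output, shape (batch, channels, H, W).
    Builds a channel-by-channel Pearson correlation matrix and penalizes its
    off-diagonal entries, i.e. linear redundancy between channels."""
    b, c, h, w = feature_maps.shape
    x = feature_maps.permute(1, 0, 2, 3).reshape(c, -1)        # (C, B*H*W)
    x = x - x.mean(dim=1, keepdim=True)
    x = x / x.norm(dim=1, keepdim=True).clamp_min(1e-12)
    corr = x @ x.t()                                           # (C, C) correlation matrix
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return off_diag.abs().sum() / max(c * (c - 1), 1)
```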
- End-to-End Sensitivity-Based Filter Pruning [49.61707925611295]
We present a sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.
Our method learns the scores from the filter weights, enabling it to account for the correlations between the filters of each layer.
arXiv Detail & Related papers (2022-04-15T10:21:05Z)
- Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
arXiv Detail & Related papers (2021-08-02T08:21:44Z)
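The Group Fisher entry above does not spell out its unified metric here. As a rough, hedged illustration of a Fisher-information-style channel score (a first-order estimate of the loss change from zeroing a channel; my own sketch, not the authors' implementation):

```python
import torch

def fisher_channel_scores(activation: torch.Tensor) -> torch.Tensor:
    """activation: a conv layer's output, shape (batch, channels, H, W), kept
    with retain_grad() so that activation.grad is populated after
    loss.backward().

    The per-channel score is the sum over the batch of
    (sum over h, w of a * dL/da)^2, a first-order, Fisher-style estimate of
    how much the loss would grow if that channel were removed.
    """
    grad = activation.grad                                        # (B, C, H, W)
    per_sample = (activation.detach() * grad).sum(dim=(2, 3))     # (B, C)
    return per_sample.pow(2).sum(dim=0)                           # (C,)
```

Channels coupled by the network structure (e.g. joined by residual additions) would need to share one score, which is the case the paper's unified metric is designed to handle.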
- ACP: Automatic Channel Pruning via Clustering and Swarm Intelligence Optimization for CNN [6.662639002101124]
Convolutional neural networks (CNNs) have become deeper and wider in recent years.
Existing magnitude-based pruning methods are efficient, but the performance of the compressed network is unpredictable.
We propose a novel automatic channel pruning method (ACP).
ACP is evaluated against several state-of-the-art CNNs on three different classification datasets.
arXiv Detail & Related papers (2021-01-16T08:56:38Z)
- Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks [15.64167076052513]
Layer pruning yields lower inference time and runtime memory usage than filter or channel pruning when the same FLOPs and number of parameters are pruned.
We propose a simple layer pruning method using a residual convolutional block (ResConv).
Our pruning method achieves excellent compression and acceleration performance over the state of the art on different datasets.
arXiv Detail & Related papers (2020-11-29T12:51:16Z)
- SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% parameters and 60.2% FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z)
- UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration [24.42067007684169]
We propose a novel uniform channel pruning (UCP) method to prune deep CNN.
The unimportant channels, including convolutional kernels related to them, are pruned directly.
We verify our method on CIFAR-10, CIFAR-100 and ILSVRC-2012 for image classification.
arXiv Detail & Related papers (2020-10-03T01:51:06Z)
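The UCP summary above notes that pruning a channel also removes the convolutional kernels tied to it in the following layer. A minimal sketch of that bookkeeping, using an L1-norm importance ranking as a stand-in criterion (an assumption on my part, not necessarily UCP's own measure):

```python
import torch

def prune_output_channels(conv_w: torch.Tensor,
                          next_w: torch.Tensor,
                          keep_ratio: float = 0.5):
    """conv_w:  (C_out, C_in, k, k)   weights of the layer being pruned.
    next_w:  (C2_out, C_out, k, k) weights of the following conv layer.

    Ranks output channels by L1 norm, keeps the top fraction, and drops the
    matching input kernels of the next layer, as the UCP summary describes."""
    scores = conv_w.abs().flatten(start_dim=1).sum(dim=1)          # (C_out,)
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep = scores.topk(n_keep).indices.sort().values
    return conv_w[keep], next_w[:, keep]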
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU after finding that the performance decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding full-precision (FPN) networks, but have only 1/4 the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
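Bounded ReLU, mentioned in the entry above, simply clips activations at a fixed ceiling so their range maps cleanly onto a small integer grid. A one-line sketch (the bound of 6.0 is an arbitrary illustrative choice, not the paper's setting):

```python
import torch

def bounded_relu(x: torch.Tensor, bound: float = 6.0) -> torch.Tensor:
    """ReLU with an upper bound: clamp(x, 0, bound). A fixed, known activation
    range is what keeps uniform integer quantization well behaved."""
    return torch.clamp(x, min=0.0, max=bound)
```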
- DMCP: Differentiable Markov Channel Pruning for Neural Networks [67.51334229530273]
We propose a novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP).
Our method is differentiable and can be directly optimized by gradient descent with respect to standard task loss and budget regularization.
To validate the effectiveness of our method, we perform extensive experiments on ImageNet with ResNet and MobileNetV2.
arXiv Detail & Related papers (2020-05-07T09:39:55Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
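Both the DACP results at the top of this page and the FLOPs-guided framework above are reported in terms of FLOPs. As a reminder of how that budget is usually counted for one convolution (a standard formula, not something specific to either paper):

```python
def conv2d_flops(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    """Multiply-accumulate count of a k x k convolution producing an
    h_out x w_out x c_out output; often doubled when additions are counted
    separately. Example: the first 3x3 conv of VGG-16 on a 32x32 CIFAR-10
    image, conv2d_flops(3, 64, 3, 32, 32) == 1_769_472 MACs."""
    return c_in * k * k * c_out * h_out * w_out
```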
This list is automatically generated from the titles and abstracts of the papers in this site.