UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration
- URL: http://arxiv.org/abs/2010.01251v1
- Date: Sat, 3 Oct 2020 01:51:06 GMT
- Title: UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration
- Authors: Jingfei Chang and Yang Lu and Ping Xue and Xing Wei and Zhen Wei
- Abstract summary: We propose a novel uniform channel pruning (UCP) method to prune deep CNNs.
The unimportant channels, including convolutional kernels related to them, are pruned directly.
We verify our method on CIFAR-10, CIFAR-100 and ILSVRC-2012 for image classification.
- Score: 24.42067007684169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To apply deep CNNs to mobile terminals and portable devices, many scholars have recently worked on compressing and accelerating deep convolutional neural networks. We propose a novel uniform channel pruning (UCP) method to prune deep CNNs, in which modified squeeze-and-excitation blocks (MSEB) measure the importance of the channels in the convolutional layers. The unimportant channels, together with the convolutional kernels related to them, are pruned directly, which greatly reduces the storage cost and the number of calculations. There are two types of residual blocks in ResNet. For ResNet with bottlenecks, we use the pruning method for traditional CNNs to trim the 3x3 convolutional layer in the middle of each block. For ResNet with basic residual blocks, we propose an approach that prunes all residual blocks in the same stage consistently, so that the compact network structure remains dimensionally correct. Because the network loses considerable information after pruning, and the larger the pruning amplitude, the more information is lost, we do not fine-tune but instead retrain the pruned network from scratch to restore its accuracy. Finally, we verify our method on CIFAR-10, CIFAR-100 and ILSVRC-2012 for image classification. The results indicate that when the pruning rate is small, the compact network retrained from scratch performs better than the original network; even when the pruning amplitude is large, the accuracy is maintained or decreases only slightly. On CIFAR-100, when the parameters and FLOPs are reduced by up to 82% and 62% respectively, the accuracy of VGG-19 even improves by 0.54% after retraining.
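The sketch below (not from the paper) illustrates the pipeline described in the abstract, with a plain squeeze-and-excitation gate standing in for the MSEB, whose exact modification is not reproduced here; the helper names SEGate, score_channels and prune_conv are illustrative.

```python
# Hedged sketch of SE-style channel importance scoring followed by direct channel removal.
import torch
import torch.nn as nn

class SEGate(nn.Module):
    """Squeeze-and-excitation gate whose output is read as per-channel importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))       # squeeze: (N, C, H, W) -> (N, C)
        return self.fc(s)            # excite: per-channel weights in (0, 1)

@torch.no_grad()
def score_channels(conv: nn.Conv2d, gate: SEGate, loader, device: str = "cpu") -> torch.Tensor:
    """Average the gate activations over a data loader to rank the conv's output channels."""
    total = torch.zeros(conv.out_channels, device=device)
    batches = 0
    for images, _ in loader:
        total += gate(conv(images.to(device))).mean(dim=0)
        batches += 1
    return total / max(batches, 1)

def prune_conv(conv: nn.Conv2d, keep: torch.Tensor) -> nn.Conv2d:
    """Return a narrower conv that keeps only the selected output channels and their kernels."""
    new_conv = nn.Conv2d(conv.in_channels, len(keep), conv.kernel_size,
                         stride=conv.stride, padding=conv.padding,
                         bias=conv.bias is not None)
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    return new_conv
```

The kept indices would also have to be propagated to the next layer's input channels, and, as described in the abstract, the resulting compact network is retrained from scratch rather than fine-tuned.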
Related papers
- Instant Complexity Reduction in CNNs using Locality-Sensitive Hashing [50.79602839359522]
We propose HASTE (Hashing for Tractable Efficiency), a parameter-free and data-free module that acts as a plug-and-play replacement for any regular convolution module.
We are able to drastically compress latent feature maps without sacrificing much accuracy by using locality-sensitive hashing (LSH).
In particular, we are able to instantly drop 46.72% of FLOPs while only losing 1.25% accuracy by just swapping the convolution modules in a ResNet34 on CIFAR-10 for our HASTE module.
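A minimal sketch of the general LSH idea, not the HASTE module itself: channels whose spatial patterns fall into the same random-hyperplane bucket are merged by averaging. The function name and bucketing scheme below are assumptions.

```python
import torch

def lsh_merge_channels(feats: torch.Tensor, num_bits: int = 8, seed: int = 0):
    """feats: (N, C, H, W). Hash each channel with random hyperplanes and average
    channels that share a bucket, returning the compressed maps and the buckets."""
    n, c, h, w = feats.shape
    gen = torch.Generator().manual_seed(seed)
    planes = torch.randn(num_bits, h * w, generator=gen).to(feats.device)   # random hyperplanes
    per_channel = feats.mean(dim=0).reshape(c, h * w)                       # one descriptor per channel
    codes = (per_channel @ planes.t() > 0).long()                           # sign pattern = hash bits
    keys = (codes * (2 ** torch.arange(num_bits, device=feats.device))).sum(dim=1)
    buckets = {int(k): (keys == k).nonzero(as_tuple=True)[0] for k in keys.unique()}
    merged = torch.stack([feats[:, idx].mean(dim=1) for idx in buckets.values()], dim=1)
    return merged, buckets
```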
arXiv Detail & Related papers (2023-09-29T13:09:40Z)
- Pruning Very Deep Neural Network Channels for Efficient Inference [6.497816402045099]
Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer.
VGG-16 achieves state-of-the-art results with a 5x speed-up and only a 0.3% increase in error.
Our method is able to accelerate modern networks like ResNet and Xception, suffering only 1.4% and 1.0% accuracy loss respectively under a 2x speed-up.
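In the original work the two steps are LASSO-based channel selection and least-squares reconstruction of the layer output; the sketch below simplifies them to a fully-connected case, and the function name and use of scikit-learn are my own.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_step_prune(X: np.ndarray, W: np.ndarray, alpha: float = 1e-2):
    """X: (n, C_in) sampled layer inputs, W: (C_in, C_out) flattened weights.
    Step 1: LASSO over per-channel contributions selects a sparse channel subset.
    Step 2: least squares refits the kept weights to reconstruct the original output."""
    Y = X @ W                                                   # outputs to preserve
    n, c_in = X.shape
    # Column c holds the (flattened) contribution of input channel c to the output.
    Z = np.stack([(X[:, [c]] @ W[[c], :]).ravel() for c in range(c_in)], axis=1)
    beta = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(Z, Y.ravel()).coef_
    keep = np.flatnonzero(np.abs(beta) > 1e-8)                  # surviving channels
    W_new, *_ = np.linalg.lstsq(X[:, keep], Y, rcond=None)      # refit on kept channels
    return keep, W_new
```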
arXiv Detail & Related papers (2022-11-14T06:48:33Z)
- CHEX: CHannel EXploration for CNN Model Compression [47.3520447163165]
We propose a novel Channel Exploration methodology, dubbed CHEX, to rectify these problems.
CHEX repeatedly prunes and regrows the channels throughout the training process, which reduces the risk of pruning important channels prematurely.
Results demonstrate that CHEX can effectively reduce the FLOPs of diverse CNN architectures on a variety of computer vision tasks.
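A generic prune-and-regrow step over a per-layer channel mask, to illustrate the schedule; CHEX's actual selection and regrowing criteria are more sophisticated than the magnitude and gradient heuristics assumed here.

```python
import torch

def prune_regrow_step(weight: torch.Tensor, grad: torch.Tensor,
                      mask: torch.Tensor, target_alive: int,
                      regrow_frac: float = 0.1) -> torch.Tensor:
    """weight/grad: (C_out, C_in, k, k); mask: (C_out,) tensor of 0s and 1s."""
    n_regrow = max(int(regrow_frac * target_alive), 1)
    # Prune: drop the lowest-magnitude alive channels to leave room for regrowth.
    importance = weight.abs().sum(dim=(1, 2, 3)).masked_fill(mask == 0, float("inf"))
    n_prune = max(int(mask.sum().item()) - (target_alive - n_regrow), 0)
    if n_prune > 0:
        mask[importance.argsort()[:n_prune]] = 0.0
    # Regrow: revive the dead channels whose current gradient magnitude is largest.
    dead_grad = grad.abs().sum(dim=(1, 2, 3)).masked_fill(mask == 1, -1.0)
    mask[dead_grad.argsort(descending=True)[:n_regrow]] = 1.0
    return mask
```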
arXiv Detail & Related papers (2022-03-29T17:52:41Z)
- AdaPruner: Adaptive Channel Pruning and Effective Weights Inheritance [9.3421559369389]
We propose a pruning framework that adaptively determines the number of channels in each layer as well as the weights inheritance criteria for the sub-network.
AdaPruner obtains the pruned network quickly, accurately and efficiently.
On ImageNet, we reduce the FLOPs of MobileNetV2 by 32.8% with only a 0.62% decrease in top-1 accuracy, which exceeds all previous state-of-the-art channel pruning methods.
arXiv Detail & Related papers (2021-09-14T01:52:05Z)
- AIP: Adversarial Iterative Pruning Based on Knowledge Transfer for Convolutional Neural Networks [7.147985297123097]
Convolutional neural networks (CNNs) incur a considerable computation cost.
Current pruning methods can compress CNNs with little performance drop, but as the pruning ratio increases, the accuracy loss becomes more serious.
We propose a novel adversarial iterative pruning method (AIP) for CNNs based on knowledge transfer.
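The knowledge-transfer ingredient can be shown compactly: the pruned student mimics the teacher's softened logits while still fitting the labels. The adversarial part of AIP (a discriminator on intermediate features) is not reproduced here, and the hyperparameters below are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
    """Standard soft-target distillation plus the usual hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```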
arXiv Detail & Related papers (2021-08-31T02:38:36Z)
- Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
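A sketch of the kind of Fisher-style score involved: per channel, accumulate the squared sum of activation times gradient over a few batches. The paper's handling of coupled channels and its pruning schedule are omitted, and the hook-based helper is an assumption.

```python
import torch
import torch.nn as nn

def fisher_channel_importance(model: nn.Module, layer: nn.Module, loader,
                              criterion, device: str = "cpu") -> torch.Tensor:
    """Accumulate a per-channel saliency of the form (sum over batch/space of a * dL/da)^2."""
    cache = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep the gradient of this intermediate tensor
        cache["out"] = output

    handle = layer.register_forward_hook(hook)
    model.to(device).train()
    importance = None
    for images, labels in loader:
        model.zero_grad()
        criterion(model(images.to(device)), labels.to(device)).backward()
        act, grad = cache["out"].detach(), cache["out"].grad     # both (N, C, H, W)
        score = (act * grad).sum(dim=(2, 3)).pow(2).sum(dim=0)   # per-channel score
        importance = score if importance is None else importance + score
    handle.remove()
    return importance.detach()
```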
arXiv Detail & Related papers (2021-08-02T08:21:44Z)
- ACP: Automatic Channel Pruning via Clustering and Swarm Intelligence Optimization for CNN [6.662639002101124]
Convolutional neural networks (CNNs) have become deeper and wider in recent years.
Existing magnitude-based pruning methods are efficient, but the performance of the compressed network is unpredictable.
We propose a novel automatic channel pruning method (ACP).
ACP is evaluated against several state-of-the-art CNNs on three different classification datasets.
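One plausible way to sketch the clustering half is to group similar filters and keep one representative per cluster; this choice (clustering flattened filter weights) and the function name are assumptions, and the swarm-intelligence search over per-layer pruning ratios that ACP adds on top is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_representatives(weight: np.ndarray, n_clusters: int) -> np.ndarray:
    """weight: (C_out, C_in, k, k). Cluster flattened filters with k-means and keep,
    for each cluster, the filter closest to its centre."""
    flat = weight.reshape(weight.shape[0], -1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    keep = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        dist = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[dist.argmin()])
    return np.sort(np.array(keep))
```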
arXiv Detail & Related papers (2021-01-16T08:56:38Z)
- Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks [15.64167076052513]
Layer pruning yields lower inference time and runtime memory usage when the same FLOPs and number of parameters are pruned.
We propose a simple layer pruning method using a residual convolutional block (ResConv).
Our pruning method achieves excellent compression and acceleration performance over the state of the art on different datasets.
arXiv Detail & Related papers (2020-11-29T12:51:16Z)
- ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting [105.97936163854693]
We propose ResRep, which slims down a CNN by reducing the width (number of output channels) of convolutional layers.
Inspired by neurobiological research on the independence of remembering and forgetting, we propose to re-parameterize a CNN into remembering parts and forgetting parts.
We equivalently merge the remembering and forgetting parts into the original architecture with narrower layers.
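The "equivalently merge" step can be shown concretely: a 1x1 compactor convolution placed after a regular convolution composes with it into a single kernel, so compactor rows driven to zero simply delete output channels. BN folding and bias terms are omitted in this sketch, and the function name is illustrative.

```python
import torch

def merge_compactor(conv_weight: torch.Tensor, compactor_weight: torch.Tensor) -> torch.Tensor:
    """conv_weight: (C, C_in, k, k); compactor_weight: (D, C, 1, 1).
    Returns the single merged kernel of shape (D, C_in, k, k)."""
    q = compactor_weight.squeeze(-1).squeeze(-1)          # (D, C) linear map over channels
    return torch.einsum("dc,cikl->dikl", q, conv_weight)  # compose the two linear maps
```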
arXiv Detail & Related papers (2020-07-07T07:56:45Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Discrimination-aware Network Pruning for Deep Model Compression [79.44318503847136]
Existing pruning methods either train from scratch with sparsity constraints or minimize the reconstruction error between the feature maps of the pre-trained models and the compressed ones.
We propose a simple-yet-effective method called discrimination-aware channel pruning (DCP) to choose the channels that actually contribute to the discriminative power.
Experiments on both image classification and face recognition demonstrate the effectiveness of our methods.
arXiv Detail & Related papers (2020-01-04T07:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.