Feature Flow Regularization: Improving Structured Sparsity in Deep
Neural Networks
- URL: http://arxiv.org/abs/2106.02914v1
- Date: Sat, 5 Jun 2021 15:00:50 GMT
- Title: Feature Flow Regularization: Improving Structured Sparsity in Deep
Neural Networks
- Authors: Yue Wu, Yuan Lan, Luchan Zhang, Yang Xiang
- Abstract summary: Pruning is a model compression method that removes redundant parameters in deep neural networks (DNNs).
We propose a simple and effective regularization strategy from a new perspective of the evolution of features, which we call feature flow regularization (FFR).
Experiments with VGGNets, ResNets on CIFAR-10/100, and Tiny ImageNet datasets demonstrate that FFR can significantly improve both unstructured and structured sparsity.
- Score: 12.541769091896624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pruning is a model compression method that removes redundant parameters in
deep neural networks (DNNs) while maintaining accuracy. Most available filter
pruning methods require complex treatments such as iterative pruning, features
statistics/ranking, or additional optimization designs in the training process.
In this paper, we propose a simple and effective regularization strategy from a
new perspective of evolution of features, which we call feature flow
regularization (FFR), for improving structured sparsity and filter pruning in
DNNs. Specifically, FFR imposes controls on the gradient and curvature of
feature flow along the neural network, which implicitly increases the sparsity
of the parameters. The principle behind FFR is that coherent and smooth
evolution of features will lead to an efficient network that avoids redundant
parameters. The high structured sparsity obtained from FFR enables us to prune
filters effectively. Experiments with VGGNets, ResNets on CIFAR-10/100, and
Tiny ImageNet datasets demonstrate that FFR can significantly improve both
unstructured and structured sparsity. Our pruning results in terms of reduction
of parameters and FLOPs are comparable to or even better than those of
state-of-the-art pruning methods.
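To make the regularization idea concrete, here is a minimal sketch, not the authors' exact formulation, of how a penalty on the first difference (gradient) and second difference (curvature) of features across consecutive blocks could be added to the training loss; the function name ffr_penalty, the weights lambda1/lambda2, and the return_features API are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact formulation): penalize the first
# difference ("gradient") and second difference ("curvature") of features
# across consecutive blocks, and add the penalty to the task loss.
import torch
import torch.nn.functional as F

def ffr_penalty(features, lambda1=1e-4, lambda2=1e-4):
    """features: list of per-block feature tensors with matching shapes
    (e.g. collected via forward hooks and pooled/projected to a common size)."""
    flow = [f.flatten(1) for f in features]                    # (batch, dim) per block
    first = [flow[i + 1] - flow[i] for i in range(len(flow) - 1)]
    second = [first[i + 1] - first[i] for i in range(len(first) - 1)]
    grad_term = sum(d.pow(2).mean() for d in first)            # smooth feature flow
    curv_term = sum(d.pow(2).mean() for d in second)           # low curvature
    return lambda1 * grad_term + lambda2 * curv_term

# Usage inside a training step (return_features is a hypothetical model API):
# logits, feats = model(x, return_features=True)
# loss = F.cross_entropy(logits, y) + ffr_penalty(feats)
```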
Related papers
- Dynamic Structure Pruning for Compressing CNNs [13.73717878732162]
We introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning.
The experimental results show that dynamic structure pruning achieves state-of-the-art pruning performance and better realistic acceleration on a GPU compared with channel pruning.
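As a rough illustration of intra-channel pruning (the paper's learned pruning granularities are not reproduced), the sketch below zeroes the weakest input-channel groups inside each filter by group L1 norm; the function name and the fixed group/keep settings are assumptions.

```python
# Illustrative intra-channel group pruning: zero the weakest input-channel
# groups inside each convolutional filter, ranked by group L1 norm.
import torch

def prune_intra_channel(weight, groups=4, keep_ratio=0.5):
    """weight: conv weight of shape (out_ch, in_ch, k, k); in_ch divisible by groups."""
    out_ch, in_ch, kh, kw = weight.shape
    w = weight.view(out_ch, groups, in_ch // groups, kh, kw)
    norms = w.abs().sum(dim=(2, 3, 4))                 # (out_ch, groups) group L1 norms
    k = max(1, int(keep_ratio * groups))
    keep = norms.topk(k, dim=1).indices                # strongest groups per filter
    mask = torch.zeros_like(norms).scatter_(1, keep, 1.0)
    w = w * mask[:, :, None, None, None]
    return w.view(out_ch, in_ch, kh, kw)
```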
arXiv Detail & Related papers (2023-03-17T02:38:53Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
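A minimal sketch of the soft-shrinkage idea, with the paper's percentage schedule omitted: rather than hard-zeroing, the smallest-magnitude weights are decayed slightly in proportion to their own magnitude so they remain trainable. The function name and constants are illustrative.

```python
# Soft shrinkage step: gently decay the smallest-magnitude weights each
# iteration instead of setting them to zero outright.
import torch

@torch.no_grad()
def soft_shrink(weight, sparsity=0.5, shrink=0.1):
    flat = weight.abs().flatten()
    k = int(sparsity * flat.numel())
    if k == 0:
        return weight
    threshold = flat.kthvalue(k).values            # magnitude cutoff for "unimportant"
    unimportant = weight.abs() <= threshold
    weight[unimportant] *= (1.0 - shrink)          # proportional decay keeps them trainable
    return weight
```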
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
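The following toy example (not the paper's SSC construction) shows how a dense convolution weight combined with structured binary masks recovers depthwise, groupwise, and pointwise convolutions as special cases; the helper group_mask and the chosen sizes are assumptions.

```python
# A dense conv weight with structured binary masks recovers familiar
# efficient layers as special cases of a structured sparsity pattern.
import torch

def group_mask(out_ch, in_ch, k, groups):
    """Block-diagonal input/output connectivity, as in grouped convolution."""
    mask = torch.zeros(out_ch, in_ch, k, k)
    for g in range(groups):
        o0, o1 = g * out_ch // groups, (g + 1) * out_ch // groups
        i0, i1 = g * in_ch // groups, (g + 1) * in_ch // groups
        mask[o0:o1, i0:i1] = 1.0
    return mask

w = torch.randn(64, 64, 3, 3)
depthwise = w * group_mask(64, 64, 3, groups=64)   # one input channel per filter
groupwise = w * group_mask(64, 64, 3, groups=8)    # block-diagonal connectivity
pointwise_mask = torch.zeros(64, 64, 3, 3)
pointwise_mask[:, :, 1, 1] = 1.0
pointwise = w * pointwise_mask                     # 1x1 spatial support only
```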
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- Trainability Preserving Neural Structured Pruning [64.65659982877891]
We present trainability preserving pruning (TPP), a regularization-based structured pruning method that can effectively maintain trainability during sparsification.
TPP can compete with the ground-truth dynamical isometry recovery method on linear networks.
It delivers encouraging performance in comparison to many top-performing filter pruning methods.
arXiv Detail & Related papers (2022-07-25T21:15:47Z)
- Boosting Pruned Networks with Linear Over-parameterization [8.796518772724955]
Structured pruning compresses neural networks by reducing channels (filters) for fast inference and low footprint at run-time.
To restore accuracy after pruning, fine-tuning is usually applied to pruned networks.
We propose a novel method that first linearly over-parameterizes the compact layers in pruned networks to enlarge the number of fine-tuning parameters.
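A minimal sketch of the linear over-parameterization idea on a fully connected layer (the paper targets pruned convolutional layers): one linear map is temporarily expanded into two consecutive linear layers for fine-tuning and merged back afterwards. Exact equivalence at initialization assumes hidden >= in_features, and the function names are illustrative.

```python
# Expand one linear map into two for fine-tuning (more trainable parameters,
# same expressive class), then merge back into a single layer for inference.
import torch
import torch.nn as nn

def expand(layer: nn.Linear, hidden: int) -> nn.Sequential:
    # Requires hidden >= layer.in_features so pinv(a.weight) @ a.weight == I.
    a = nn.Linear(layer.in_features, hidden, bias=False)
    b = nn.Linear(hidden, layer.out_features, bias=True)
    with torch.no_grad():                  # initialize so b(a(x)) == layer(x)
        b.weight.copy_(layer.weight @ torch.linalg.pinv(a.weight))
        b.bias.copy_(layer.bias)
    return nn.Sequential(a, b)

def merge(expanded: nn.Sequential) -> nn.Linear:
    a, b = expanded
    merged = nn.Linear(a.in_features, b.out_features)
    with torch.no_grad():
        merged.weight.copy_(b.weight @ a.weight)   # compose the two linear maps
        merged.bias.copy_(b.bias)
    return merged
```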
arXiv Detail & Related papers (2022-04-25T05:30:26Z)
- Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs [69.3939291118954]
Unstructured pruning is well suited to reducing the memory footprint of convolutional neural networks (CNNs).
Standard unstructured pruning (SP) reduces the memory footprint of CNNs by setting filter elements to zero.
We introduce interspace pruning (IP), a general tool to improve existing pruning methods.
arXiv Detail & Related papers (2022-03-15T11:50:45Z)
- RGP: Neural Network Pruning through Its Regular Graph Structure [6.0686251332936365]
We study the graph structure of the neural network and propose regular graph based pruning (RGP) to perform one-shot neural network pruning.
Experiments show that the average shortest path length of the graph is negatively correlated with the classification accuracy of the corresponding neural network.
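The graph statistic in question can be computed with networkx; the sketch below uses a random regular graph as a stand-in for a relational graph, since the paper's construction of the relational graph from the network is not reproduced here.

```python
# Compute the graph statistic highlighted by RGP: the average shortest path
# length of a regular (relational) graph.
import networkx as nx

# A d-regular graph on n nodes stands in for a candidate relational graph;
# average_shortest_path_length requires the graph to be connected.
G = nx.random_regular_graph(d=8, n=64, seed=0)
aspl = nx.average_shortest_path_length(G)
print(f"average shortest path length: {aspl:.3f}")  # lower is reported to correlate with higher accuracy
```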
arXiv Detail & Related papers (2021-10-28T15:08:32Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
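For reference, a minimal sketch of those baseline criteria, filter weight norms and batch-norm scaling factors, is shown below; the paper's dependency-aware control of the regularization strength is not reproduced, and the layer sizes are arbitrary.

```python
# Baseline filter importance criteria: per-filter weight L1 norm and the
# associated BatchNorm scaling factor (gamma).
import torch
import torch.nn as nn

def filter_scores(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    weight_norm = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1 norm
    bn_scale = bn.weight.detach().abs()                          # per-channel gamma
    return weight_norm, bn_scale

conv = nn.Conv2d(16, 32, kernel_size=3)
bn = nn.BatchNorm2d(32)
norms, gammas = filter_scores(conv, bn)
prune_idx = norms.argsort()[: 32 // 2]   # e.g. drop the 50% lowest-norm filters
```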
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.