Slimming Neural Networks using Adaptive Connectivity Scores
- URL: http://arxiv.org/abs/2006.12463v3
- Date: Fri, 17 Dec 2021 21:44:23 GMT
- Title: Slimming Neural Networks using Adaptive Connectivity Scores
- Authors: Madan Ravi Ganesh, Dawsin Blanchard, Jason J. Corso and Salimeh Yasaei Sekeh
- Abstract summary: We propose a new single-shot, fully automated pruning algorithm called Slimming Neural networks using Adaptive Connectivity Scores (SNACS)
Our proposed approach combines a probabilistic pruning framework with constraints on the underlying weight matrices.
SNACS is over 17x faster than the nearest comparable method.
- Score: 28.872080203221934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In general, deep neural network (DNN) pruning methods fall into two
categories: 1) Weight-based deterministic constraints, and 2) Probabilistic
frameworks. While each approach has its merits and limitations, both are plagued by a set
of common practical issues, such as the trial-and-error needed to analyze sensitivity and
the hyper-parameters required to prune DNNs. In this work, we
propose a new single-shot, fully automated pruning algorithm called Slimming
Neural networks using Adaptive Connectivity Scores (SNACS). Our proposed
approach combines a probabilistic pruning framework with constraints on the
underlying weight matrices, via a novel connectivity measure, at multiple
levels to capitalize on the strengths of both approaches while solving their
deficiencies. In SNACS, we propose a fast hash-based estimator of Adaptive
Conditional Mutual Information (ACMI), that uses a weight-based scaling
criterion, to evaluate the connectivity between filters and prune unimportant
ones. To automatically determine the limit up to which a layer can be pruned,
we propose a set of operating constraints that jointly define the upper pruning
percentage limits across all the layers in a deep network. Finally, we define a
novel sensitivity criterion for filters that measures the strength of their
contributions to the succeeding layer and highlights critical filters that need
to be completely protected from pruning. Through our experimental validation we
show that SNACS is over 17x faster than the nearest comparable method and is the
state of the art single-shot pruning method across three standard Dataset-DNN
pruning benchmarks: CIFAR10-VGG16, CIFAR10-ResNet56 and ILSVRC2012-ResNet50.
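The abstract only outlines the pipeline, so the sketch below is a minimal illustration of the overall flow: score each filter by its connectivity to the preceding layer, prune the lowest-scoring filters up to a per-layer upper limit, and fully protect filters flagged as sensitive. It uses a simple weight-scaled correlation proxy in place of the paper's hash-based ACMI estimator; the function names, the proxy score, and the NumPy implementation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def connectivity_scores(acts_in, acts_out, weights):
    """Per-filter connectivity proxy. NOTE: a weight-scaled correlation
    stand-in for illustration only, not the paper's hash-based Adaptive
    Conditional Mutual Information (ACMI) estimator."""
    # acts_in: (n_samples, n_in) activations of the preceding layer
    # acts_out: (n_samples, n_out) activations of the current layer
    # weights: (n_out, n_in) connection weights between the two layers
    n_out = acts_out.shape[1]
    scores = np.zeros(n_out)
    for j in range(n_out):
        # |correlation| between output filter j and every preceding filter
        corr = np.abs(np.corrcoef(acts_in.T, acts_out[:, j])[-1, :-1])
        corr = np.nan_to_num(corr)  # guard against constant activations
        # weight-based scaling criterion (normalized weight magnitudes)
        scale = np.abs(weights[j]) / (np.abs(weights[j]).max() + 1e-12)
        scores[j] = float(np.mean(scale * corr))
    return scores

def prune_layer(scores, upper_prune_pct, protected=()):
    """Prune the lowest-scoring filters up to the layer's upper pruning
    limit, while fully protecting indices flagged as sensitive."""
    budget = int(upper_prune_pct * len(scores))   # layer-wise upper limit
    order = np.argsort(scores)                    # least connected first
    to_prune = [i for i in order if i not in set(protected)][:budget]
    keep = np.ones(len(scores), dtype=bool)
    keep[to_prune] = False
    return keep                                   # mask of retained filters
```

In the actual method, the per-layer upper limits come from the jointly defined operating constraints rather than a fixed `upper_prune_pct`, and the protected set is determined by the sensitivity criterion that measures each filter's contribution to the succeeding layer.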
Related papers
- Complexity-Aware Training of Deep Neural Networks for Optimal Structure Discovery [0.0]
We propose a novel algorithm for combined unit/filter and layer pruning of deep neural networks that operates during training, without requiring a pre-trained network.
Our algorithm optimally trades off learning accuracy and pruning levels while balancing layer vs. unit/filter pruning and computational vs. parameter complexity, using only three user-defined parameters.
arXiv Detail & Related papers (2024-11-14T02:00:22Z) - Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure using residual connections around nonlinear network sections that allow the flow of information through the network once a nonlinear section is pruned.
arXiv Detail & Related papers (2024-06-06T23:19:57Z) - Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
We show that convergence guarantees and generalizability of the unrolled networks are still open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Robust-by-Design Classification via Unitary-Gradient Neural Networks [66.17379946402859]
The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks.
Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions.
A novel network architecture named Unitary-Gradient Neural Network is presented.
Experimental results show that the proposed architecture approximates a signed distance, hence allowing an online certifiable classification of x at the cost of a single inference.
arXiv Detail & Related papers (2022-09-09T13:34:51Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time
Mobile Acceleration [71.80326738527734]
We propose a general, fine-grained structured pruning scheme and corresponding compiler optimizations.
We show that our pruning scheme mapping methods, together with the general fine-grained structured pruning scheme, outperform the state-of-the-art DNN optimization framework.
arXiv Detail & Related papers (2021-11-22T23:53:14Z) - Compact representations of convolutional neural networks via weight
pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while remaining at least as competitive as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Layer Adaptive Node Selection in Bayesian Neural Networks: Statistical
Guarantees and Implementation Details [0.5156484100374059]
Sparse deep neural networks have proven to be efficient for predictive model building in large-scale studies.
We propose a Bayesian sparse solution using spike-and-slab Gaussian priors to allow for node selection during training.
We establish the fundamental result of variational posterior consistency together with the characterization of prior parameters.
arXiv Detail & Related papers (2021-08-25T00:48:07Z) - Only Train Once: A One-Shot Neural Network Training And Pruning
Framework [31.959625731943675]
Structured pruning is a commonly used technique in deploying deep neural networks (DNNs) onto resource-constrained devices.
We propose Only-Train-Once (OTO), a framework that yields slimmer DNNs with competitive performance and significant FLOPs reductions.
OTO has two key components: (i) the parameters of a DNN are partitioned into zero-invariant groups, enabling zero groups to be pruned without affecting the output; and (ii) to promote zero groups, a structured-sparsity optimization algorithm, Half-Space Projected Gradient (HSPG), is formulated; a sketch of the zero-invariant-group idea appears after this list.
arXiv Detail & Related papers (2021-07-15T17:15:20Z) - MINT: Deep Network Compression via Mutual Information-based Neuron
Trimming [32.449324736645586]
Mutual Information-based Neuron Trimming (MINT) approaches deep compression via pruning.
MINT enforces sparsity based on the strength of the relationship between filters of adjacent layers.
When pruning a network, we ensure that retained filters contribute the majority of the information towards succeeding layers.
arXiv Detail & Related papers (2020-03-18T21:05:02Z)