Neural Networks Reduction via Lumping
- URL: http://arxiv.org/abs/2209.07475v1
- Date: Thu, 15 Sep 2022 17:13:07 GMT
- Title: Neural Networks Reduction via Lumping
- Authors: Dalila Ressi, Riccardo Romanello, Sabina Rossi and Carla Piazza
- Abstract summary: A large number of solutions have been published to reduce both the number of operations and the parameters involved in the models.
Most of these reduction techniques are heuristic methods and usually require at least one re-training step to recover the accuracy.
We propose a pruning approach that reduces the number of neurons in a network without using any data or fine-tuning, while completely preserving the exact behaviour.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The increasing size of recently proposed Neural Networks makes it hard to
implement them on embedded devices, where memory, battery and computational
power are a non-trivial bottleneck. For this reason, in recent years the
network compression literature has been thriving, and a large number of
solutions have been published to reduce both the number of operations and
the parameters involved in the models. Unfortunately, most of these reduction
techniques are heuristic methods and usually require at least one
re-training step to recover the accuracy. The need for model-reduction
procedures is also well known in the fields of Verification and Performance
Evaluation, where large efforts have been devoted to the definition of
quotients that preserve the observable underlying behaviour. In this paper we
try to bridge the gap between the most popular and very effective network
reduction strategies and formal notions, such as lumpability, introduced for
the verification and evaluation of Markov Chains. Elaborating on lumpability,
we propose a pruning approach that reduces the number of neurons in a network
without using any data or fine-tuning, while completely preserving the exact
behaviour. By relaxing the constraints on the exact definition of the
quotienting method, we can give a formal explanation of some of the most
common reduction techniques.
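A minimal sketch of why such an exact, data-free reduction is possible (an illustration, not the paper's actual quotienting algorithm): in a feed-forward ReLU layer, neurons with identical incoming weights and biases produce identical activations on every input, so they can be lumped into one neuron whose outgoing weights are the sums of those of the merged neurons. The helper and variable names below are hypothetical and assume plain NumPy matrices.

```python
import numpy as np

def lump_identical_neurons(W_in, b, W_out):
    """Merge hidden neurons whose incoming weights and biases coincide.

    Such neurons compute identical activations for every input, so they can be
    collapsed into a single neuron whose outgoing weights are the sum of the
    merged neurons' outgoing weights; the layer's function is preserved exactly,
    with no data and no retraining.
    """
    seen = {}                      # (incoming weights, bias) -> index of kept neuron
    keep, merge_into = [], []
    for j in range(W_in.shape[1]):
        key = (W_in[:, j].tobytes(), float(b[j]))
        if key not in seen:
            seen[key] = len(keep)
            keep.append(j)
        merge_into.append(seen[key])

    W_in_r, b_r = W_in[:, keep], b[keep]
    W_out_r = np.zeros((len(keep), W_out.shape[1]))
    for j, k in enumerate(merge_into):
        W_out_r[k] += W_out[j]     # sum outgoing weights of merged neurons
    return W_in_r, b_r, W_out_r

# Quick check on a 2-layer ReLU network where hidden neuron 2 duplicates neuron 0.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); W1[:, 2] = W1[:, 0]
b1 = rng.normal(size=3);      b1[2] = b1[0]
W2 = rng.normal(size=(3, 2))

W1r, b1r, W2r = lump_identical_neurons(W1, b1, W2)
x = rng.normal(size=(5, 4))
full    = np.maximum(x @ W1  + b1,  0) @ W2
reduced = np.maximum(x @ W1r + b1r, 0) @ W2r
assert np.allclose(full, reduced)  # behaviour preserved exactly
```

The paper's lumpability-based construction is more general than this exact-duplicate case; the sketch only shows why merging behaviourally equivalent neurons by summing their outgoing weights leaves the network's input-output map untouched.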
Related papers
- Constraint Guided Model Quantization of Neural Networks [0.0]
Constraint Guided Model Quantization (CGMQ) is a quantization aware training algorithm that uses an upper bound on the computational resources and reduces the bit-widths of the parameters of the neural network.
It is shown on MNIST that the performance of CGMQ is competitive with state-of-the-art quantization aware training algorithms.
arXiv Detail & Related papers (2024-09-30T09:41:16Z)
- Split-Boost Neural Networks [1.1549572298362787]
We propose an innovative training strategy for feed-forward architectures, called split-boost.
Such a novel approach ultimately allows us to avoid explicitly modeling the regularization term.
The proposed strategy is tested on a real-world (anonymized) dataset within a benchmark medical insurance design problem.
arXiv Detail & Related papers (2023-09-06T17:08:57Z)
- On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z)
- Neural Network Pruning Through Constrained Reinforcement Learning [3.2880869992413246]
We propose a general methodology for pruning neural networks.
Our proposed methodology can prune neural networks to respect pre-defined computational budgets.
We demonstrate the effectiveness of our approach via comparison with state-of-the-art methods on standard image classification datasets.
arXiv Detail & Related papers (2021-10-16T11:57:38Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of mapping continuous real-valued numbers onto a fixed discrete set of numbers so as to minimize the number of bits required.
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x.
arXiv Detail & Related papers (2021-03-25T06:57:11Z)
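As a hedged illustration of the definition in the survey entry above, the sketch below applies symmetric uniform quantization to a float32 weight vector, keeping only small signed integer codes plus one per-tensor scale; the function names and the 4-bit setting are illustrative, not taken from the survey.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Symmetric uniform quantization: map real values onto a fixed discrete
    set of signed integers, keeping a single per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4-bit signed codes
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_uniform(w, num_bits=4)
err = np.abs(w - dequantize(q, scale)).max()
# Each 4-bit code needs one eighth of the bits of a float32 value; packing the
# codes (not shown) and using even fewer bits pushes toward the 16x figure above.
print(f"max abs quantization error: {err:.4f}")
```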
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
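A minimal sketch of the rising-penalty idea described in the entry above (an illustration under simplified assumptions, not the authors' full algorithm): the L2 coefficient applied to the weights grows with the training step, so weight decay becomes progressively stronger and unimportant weights are pushed toward zero before pruning.

```python
import numpy as np

def growing_l2_step(w, grad, lr=0.1, step=0, delta=1e-4):
    """One gradient step with an L2 penalty whose factor rises linearly with
    the training step (illustrative names and schedule)."""
    penalty = delta * step                   # rising L2 coefficient
    return w - lr * (grad + penalty * w)     # update with growing weight decay

w = np.array([0.5, -0.2, 0.05])
for step in range(1000):
    w = growing_l2_step(w, grad=np.zeros_like(w), step=step)
print(w)  # magnitudes shrink faster as the penalty grows
```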
- MaxDropout: Deep Neural Network Regularization Based on Maximum Output Values [0.0]
MaxDropout is a regularizer for deep neural network models that works in a supervised fashion by removing prominent neurons.
We show that it is possible to improve existing neural networks, obtaining better results when Dropout is replaced by MaxDropout.
arXiv Detail & Related papers (2020-07-27T17:55:54Z)
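One hedged way to realize the "remove prominent neurons" idea summarized above (names and the exact threshold rule are illustrative): min-max normalize the activations and, during training, zero out the units whose normalized value exceeds one minus the drop rate.

```python
import numpy as np

def max_dropout(activations, drop_rate=0.3, training=True):
    """MaxDropout-style regularization sketch: instead of dropping random
    units (standard Dropout), suppress the most active ones."""
    if not training:
        return activations
    a = activations
    norm = (a - a.min()) / (a.max() - a.min() + 1e-12)  # scale to [0, 1]
    mask = norm <= (1.0 - drop_rate)                    # keep less-active units
    return a * mask

h = np.random.default_rng(1).random((2, 8))
print(max_dropout(h, drop_rate=0.25))
```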
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
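As a hedged illustration of the "minimizing the quantization error" family of solutions mentioned above, the sketch below binarizes a weight tensor to +1/-1 together with a per-tensor scaling factor; taking the scale as the mean absolute weight is the value that minimizes the L2 error of sign binarization (specific methods in the survey differ in many details).

```python
import numpy as np

def binarize_weights(w):
    """Binarize weights to {-1, +1} with a scaling factor alpha; for sign
    binarization the per-tensor alpha minimizing ||w - alpha*sign(w)||_2
    is the mean absolute weight."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

w = np.random.default_rng(0).normal(size=(16, 16))
w_bin, alpha = binarize_weights(w)
print(alpha, np.linalg.norm(w - w_bin))  # residual quantization error
```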
This list is automatically generated from the titles and abstracts of the papers on this site.