Efficient Sparse Artificial Neural Networks
- URL: http://arxiv.org/abs/2103.07674v1
- Date: Sat, 13 Mar 2021 10:03:41 GMT
- Title: Efficient Sparse Artificial Neural Networks
- Authors: Seyed Majid Naji, Azra Abtahi, Farokh Marvasti
- Abstract summary: The brain, as the source of inspiration for Artificial Neural Networks (ANN), is based on a sparse structure.
This sparse structure helps the brain consume less energy, learn more easily, and generalize patterns better than any ANN.
In this paper, two evolutionary methods for introducing sparsity into ANNs are proposed.
- Score: 11.945854832533232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The brain, as the source of inspiration for Artificial Neural Networks (ANN),
is based on a sparse structure. This sparse structure helps the brain consume
less energy, learn more easily, and generalize patterns better than any ANN.
In this paper, two evolutionary methods for introducing sparsity into ANNs are
proposed. In the proposed methods, the sparse structure of a network as well as
the values of its parameters are trained and updated during the learning
process. The simulation results show that these two methods achieve better
accuracy and faster convergence while requiring fewer training samples than
their sparse and non-sparse counterparts. Furthermore, the proposed methods
significantly improve the generalization power and reduce the number of
parameters. For example, sparsifying the ResNet47 network with the proposed
methods for image classification on the ImageNet dataset uses 40% fewer
parameters, while the model's top-1 accuracy improves by 12% and 5% compared
to the dense network and its sparse counterpart, respectively. As another
example, on the CIFAR10 dataset the proposed methods converge to their final
structure 7 times faster than their sparse counterpart, while the final
accuracy increases by 6%.
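The paper's code is not reproduced here; the block below is only a minimal sketch, in plain NumPy, of the kind of evolutionary sparse-training loop the abstract describes: a binary connectivity mask is trained jointly with the weights by periodically pruning the smallest-magnitude active connections and regrowing random new ones. The layer sizes, density, prune fraction `zeta`, toy data, and random-regrowth rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch of evolutionary sparse training: a masked two-layer MLP whose binary
# connectivity mask is pruned (smallest-magnitude weights dropped) and regrown
# (random new connections added) every few epochs while the weights are trained.
# All sizes and hyperparameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def init_sparse(shape, density=0.1):
    """Random weights with a random binary mask at the given density."""
    w = rng.normal(0, 0.1, size=shape)
    mask = (rng.random(shape) < density).astype(w.dtype)
    return w * mask, mask

def prune_and_regrow(w, mask, zeta=0.3):
    """Drop the fraction `zeta` of active weights with smallest magnitude,
    then regrow the same number of connections at random inactive positions."""
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    if n_drop == 0:
        return w, mask
    # prune: smallest-|w| active connections
    drop = active[np.argsort(np.abs(w.ravel()[active]))[:n_drop]]
    mask.ravel()[drop] = 0
    w.ravel()[drop] = 0.0
    # regrow: random inactive positions, fresh small weights
    inactive = np.flatnonzero(mask == 0)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    mask.ravel()[grow] = 1
    w.ravel()[grow] = rng.normal(0, 0.1, size=n_drop)
    return w, mask

# Toy data and a toy sparse MLP (784 -> 256 -> 10), purely for illustration.
X = rng.normal(size=(512, 784))
y = rng.integers(0, 10, size=512)
W1, M1 = init_sparse((784, 256))
W2, M2 = init_sparse((256, 10))
lr = 0.1

for epoch in range(20):
    # forward pass: ReLU hidden layer, softmax output
    h = np.maximum(X @ W1, 0)
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # cross-entropy gradient
    g = p.copy()
    g[np.arange(len(y)), y] -= 1
    g /= len(y)
    gW2 = h.T @ g
    gh = (g @ W2.T) * (h > 0)
    gW1 = X.T @ gh
    # masked SGD step: only active connections are updated
    W1 -= lr * gW1 * M1
    W2 -= lr * gW2 * M2
    # periodically evolve the sparse topology
    if (epoch + 1) % 5 == 0:
        W1, M1 = prune_and_regrow(W1, M1)
        W2, M2 = prune_and_regrow(W2, M2)
```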
Related papers
- Towards Generalized Entropic Sparsification for Convolutional Neural Networks [0.0]
Convolutional neural networks (CNNs) are reported to be overparametrized.
Here, we introduce a layer-by-layer, data-driven pruning method based on a computationally scalable entropic relaxation of the pruning problem.
The sparse subnetwork is found from the pre-trained (full) CNN using the network entropy minimization as a sparsity constraint.
arXiv Detail & Related papers (2024-04-06T21:33:39Z) - Breaking the Architecture Barrier: A Method for Efficient Knowledge Transfer Across Networks [0.0]
We present a method for transferring parameters between neural networks with different architectures.
Our method, called DPIAT, uses dynamic programming to match blocks and layers between architectures and transfer parameters efficiently.
In experiments on ImageNet, our method improved validation accuracy by an average factor of 1.6 after 50 epochs of training.
arXiv Detail & Related papers (2022-12-28T17:35:41Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures; a parameter-count sketch of the depthwise-plus-pointwise special case is given after this list.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - Boost Neural Networks by Checkpoints [9.411567653599358]
We propose a novel method to ensemble the checkpoints of deep neural networks (DNNs).
With the same training budget, our method achieves 4.16% lower error on CIFAR-100 and 6.96% on Tiny-ImageNet with the ResNet-110 architecture; a generic checkpoint-ensembling sketch is given after this list.
arXiv Detail & Related papers (2021-10-03T09:14:15Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs) which only contain additions bring us a new way of developing deep neural networks with low energy consumption.
There is an accuracy drop when replacing all convolution filters by adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z) - Topological Insights into Sparse Neural Networks [16.515620374178535]
We introduce an approach to understand and compare sparse neural network topologies from the perspective of graph theory.
We first propose Neural Network Sparse Topology Distance (NNSTD) to measure the distance between different sparse neural networks.
We show that adaptive sparse connectivity can always unveil a plenitude of sparse sub-networks with very different topologies which outperform the dense model.
arXiv Detail & Related papers (2020-06-24T22:27:21Z) - Ensembled sparse-input hierarchical networks for high-dimensional datasets [8.629912408966145]
We show that dense neural networks can be a practical data analysis tool in settings with small sample sizes.
The proposed method prunes the network structure by tuning only two L1-penalty parameters.
On a collection of real-world datasets with different sizes, EASIER-net selected network architectures in a data-adaptive manner and achieved higher prediction accuracy than off-the-shelf methods on average.
arXiv Detail & Related papers (2020-05-11T02:08:53Z)
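As background for the Structured Sparse Convolution entry above, which describes depthwise, groupwise and pointwise convolutions as structured-sparse special cases of a dense convolution, the sketch below compares the parameter count of a dense KxK convolution against its depthwise-plus-pointwise (depthwise-separable) factorization. The channel and kernel sizes are arbitrary illustrative choices, not figures from that paper.

```python
# Parameter counts: a dense KxK convolution versus its depthwise-separable
# factorization (depthwise KxK + pointwise 1x1), one of the structured-sparse
# layer types that SSC is said to generalize. Sizes below are illustrative.
def dense_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k       # one KxK filter per input channel
    pointwise = c_in * c_out       # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3
dense = dense_conv_params(c_in, c_out, k)
separable = depthwise_separable_params(c_in, c_out, k)
print(f"dense: {dense:,} params")          # 294,912
print(f"separable: {separable:,} params")  # 33,920
print(f"reduction: {dense / separable:.1f}x")
```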
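For the "Boost Neural Networks by Checkpoints" entry above, the following is a minimal sketch of plain checkpoint ensembling: snapshots saved during a single training run are combined by averaging their predicted class probabilities. It is not that paper's specific boosting-style weighting scheme; the multinomial-logistic toy model, data, and snapshot schedule are placeholder assumptions.

```python
# Sketch of checkpoint ensembling: snapshots of one model taken at several
# points during training are combined by averaging their class probabilities.
# Model, data, and checkpoint schedule are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
true_w = rng.normal(size=(20, 3))
y = (X @ true_w).argmax(axis=1)            # toy 3-class labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((20, 3))
checkpoints = []
lr = 0.5
for epoch in range(30):
    p = softmax(X @ W)
    g = X.T @ (p - np.eye(3)[y]) / len(y)  # cross-entropy gradient
    W -= lr * g
    if (epoch + 1) % 10 == 0:              # save a checkpoint every 10 epochs
        checkpoints.append(W.copy())

# Ensemble: average the probability outputs of all saved checkpoints.
probs = np.mean([softmax(X @ Wc) for Wc in checkpoints], axis=0)
ensemble_acc = (probs.argmax(axis=1) == y).mean()
print(f"ensemble accuracy on toy data: {ensemble_acc:.3f}")
```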