SpinalNet: Deep Neural Network with Gradual Input
- URL: http://arxiv.org/abs/2007.03347v3
- Date: Fri, 7 Jan 2022 05:48:48 GMT
- Title: SpinalNet: Deep Neural Network with Gradual Input
- Authors: H M Dipu Kabir, Moloud Abdar, Seyed Mohammad Jafar Jalali, Abbas
Khosravi, Amir F Atiya, Saeid Nahavandi, Dipti Srinivasan
- Abstract summary: We study the human somatosensory system and design a neural network (SpinalNet) to achieve higher accuracy with fewer computations.
In the proposed SpinalNet, each layer is divided into three splits: 1) input split, 2) intermediate split, and 3) output split.
The SpinalNet can also be used as the fully connected or classification layer of a DNN and supports both traditional learning and transfer learning.
- Score: 12.71050888779988
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) have achieved state-of-the-art performance in
numerous fields. However, DNNs require long computation times, and better
performance at lower computational cost is always desirable. We therefore study
the human somatosensory system and design a neural network (SpinalNet) that
achieves higher accuracy with fewer computations. Hidden layers in traditional
NNs receive inputs from the previous layer, apply an activation function, and
then transfer the outcomes to the next layer. In the proposed SpinalNet, each
layer is divided into three splits: 1) input split, 2) intermediate split, and
3) output split. The input split of each layer receives a part of the inputs.
The intermediate split of each layer receives the outputs of the intermediate
split of the previous layer and the outputs of the input split of the current
layer. The number of incoming weights therefore becomes significantly lower
than in traditional DNNs. The SpinalNet can also be used as the fully connected
or classification layer of a DNN, and it supports both traditional learning and
transfer learning. We observe significant error reductions with lower
computational costs in most of the DNNs. Traditional learning on the VGG-5
network with SpinalNet classification layers provided state-of-the-art (SOTA)
performance on the QMNIST, Kuzushiji-MNIST, and EMNIST (Letters, Digits, and
Balanced) datasets. Traditional learning with ImageNet pre-trained initial
weights and SpinalNet classification layers provided SOTA performance on the
STL-10, Fruits 360, Bird225, and Caltech-101 datasets. The scripts of the
proposed SpinalNet are available at the following link:
https://github.com/dipuk0506/SpinalNet
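The spinal classification head described in the abstract can be sketched in PyTorch as follows. This is a minimal, illustrative sketch derived only from the description above, not the authors' exact code from the linked repository; the backbone feature size (512), the layer width (128), the number of spinal layers (4), and the even split of the input features are assumptions.

```python
import torch
import torch.nn as nn

class SpinalClassifier(nn.Module):
    """Sketch of a SpinalNet-style classification head.

    The flattened feature vector is split in half; each spinal layer
    receives one half of the input plus the output of the previous
    spinal layer, so every Linear has far fewer incoming weights than
    a dense layer over the full feature vector. All spinal outputs are
    concatenated and fed to the final output layer.
    """

    def __init__(self, in_features=512, layer_width=128, num_classes=10):
        super().__init__()
        assert in_features % 2 == 0
        self.half = in_features // 2
        self.layer1 = nn.Sequential(nn.Linear(self.half, layer_width), nn.ReLU())
        self.layer2 = nn.Sequential(nn.Linear(self.half + layer_width, layer_width), nn.ReLU())
        self.layer3 = nn.Sequential(nn.Linear(self.half + layer_width, layer_width), nn.ReLU())
        self.layer4 = nn.Sequential(nn.Linear(self.half + layer_width, layer_width), nn.ReLU())
        self.fc_out = nn.Linear(4 * layer_width, num_classes)

    def forward(self, x):
        # Input split: the two halves of the feature vector are fed in gradually.
        a, b = x[:, :self.half], x[:, self.half:]
        x1 = self.layer1(a)                            # first half only
        x2 = self.layer2(torch.cat([b, x1], dim=1))    # second half + previous split
        x3 = self.layer3(torch.cat([a, x2], dim=1))    # first half again + previous split
        x4 = self.layer4(torch.cat([b, x3], dim=1))    # second half again + previous split
        # Output split: all intermediate outputs feed the classifier.
        return self.fc_out(torch.cat([x1, x2, x3, x4], dim=1))

# Example: use as a drop-in classification head on 512-dimensional features.
head = SpinalClassifier(in_features=512, layer_width=128, num_classes=10)
logits = head(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```

For transfer learning, such a head would replace the final fully connected layer of an ImageNet pre-trained backbone, matching the usage the abstract reports for STL-10, Fruits 360, Bird225, and Caltech-101.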
Related papers
- NEAR: A Training-Free Pre-Estimator of Machine Learning Model Performance [0.0]
We propose a zero-cost proxy Network Expressivity by Activation Rank (NEAR) to identify the optimal neural network without training.
We demonstrate a state-of-the-art correlation between this network score and model accuracy on NAS-Bench-101 and NATS-Bench-SSS/TSS.
arXiv Detail & Related papers (2024-08-16T14:38:14Z)
- Investigating Sparsity in Recurrent Neural Networks [0.0]
This thesis focuses on investigating the effects of pruning and Sparse Recurrent Neural Networks on the performance of RNNs.
We first describe the pruning of RNNs, its impact on the performance of RNNs, and the number of training epochs required to regain accuracy after the pruning is performed.
Next, we continue with the creation and training of Sparse Recurrent Neural Networks and identify the relation between the performance and the graph properties of its underlying arbitrary structure.
arXiv Detail & Related papers (2024-07-30T07:24:58Z)
- Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
arXiv Detail & Related papers (2023-05-31T21:00:50Z)
- I-SPLIT: Deep Network Interpretability for Split Computing [11.652957867167098]
This work makes a substantial step in the field of split computing, i.e., how to split a deep neural network to host its early part on an embedded device and the rest on a server.
We show that not only the architecture of the layers matters, but also the importance of the neurons contained therein.
arXiv Detail & Related papers (2022-09-23T14:26:56Z)
- SAR Image Classification Based on Spiking Neural Network through Spike-Time Dependent Plasticity and Gradient Descent [7.106664778883502]
Spiking neural network (SNN) is one of the core components of brain-like intelligence.
This article constructs a complete SAR image classification system based on unsupervised and supervised learning with SNNs.
arXiv Detail & Related papers (2021-06-15T09:36:04Z)
- Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers [54.47911829539919]
We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers.
We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks.
The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
arXiv Detail & Related papers (2021-02-09T08:19:49Z)
- Pooling Methods in Deep Neural Networks, a Review [6.1678491628787455]
The pooling layer is an important layer that performs down-sampling on the feature maps coming from the previous layer.
In this paper, we reviewed some of the famous and useful pooling methods.
arXiv Detail & Related papers (2020-09-16T06:11:40Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
In this study, the focus is on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
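As a companion to the DenseNet summary above, here is a minimal sketch of the dense-connectivity pattern it describes. It is illustrative only, not the torchvision or reference implementation; the channel count, growth rate, and number of layers are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyDenseBlock(nn.Module):
    """Sketch of DenseNet-style connectivity: each layer receives the
    concatenated feature maps of all preceding layers, and its own
    feature maps are passed on to all subsequent layers."""

    def __init__(self, in_channels=16, growth_rate=12, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # the next layer sees all previous feature maps

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # concatenate all preceding outputs
            features.append(out)
        return torch.cat(features, dim=1)

block = TinyDenseBlock()
print(block(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 52, 32, 32])
```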
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.