Neuroscience inspired scientific machine learning (Part-1): Variable spiking neuron for regression
- URL: http://arxiv.org/abs/2311.09267v1
- Date: Wed, 15 Nov 2023 08:59:06 GMT
- Title: Neuroscience inspired scientific machine learning (Part-1): Variable spiking neuron for regression
- Authors: Shailesh Garg and Souvik Chakraborty
- Abstract summary: We introduce a novel spiking neuron, termed the Variable Spiking Neuron (VSN). It can reduce redundant firing using lessons from the biologically inspired Leaky Integrate-and-Fire Spiking Neuron (LIF-SN).
- Score: 2.1756081703276
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Redundant information transfer in a neural network can increase the complexity of a deep learning model and thus its power consumption. We introduce a novel spiking neuron, termed the Variable Spiking Neuron (VSN), which reduces redundant firing by drawing lessons from the biologically inspired Leaky Integrate-and-Fire Spiking Neuron (LIF-SN). The proposed VSN blends the LIF-SN with the artificial neuron, combining the intermittent firing of the former with the continuous activation of the latter. This property makes the VSN suitable for regression tasks, a weak point of vanilla spiking neurons, while keeping the energy budget low. The proposed VSN is tested on both classification and regression tasks, and the results support its efficacy, particularly for regression.
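To make the mechanism concrete, here is a minimal NumPy sketch of the idea: LIF-style leaky integration with intermittent firing, except that a graded (continuous) value is emitted when the neuron fires. The function name, constants, and the tanh readout are illustrative assumptions, not the authors' implementation.

import numpy as np

def variable_spiking_neuron(inputs, beta=0.9, threshold=1.0):
    # Toy blend of LIF dynamics and continuous activation (assumed).
    v = 0.0                             # membrane potential
    outputs = []
    for x in inputs:
        v = beta * v + x                # leaky integration, as in LIF
        if v >= threshold:              # intermittent firing
            outputs.append(np.tanh(v))  # graded output, useful for regression
            v = 0.0                     # reset after firing
        else:
            outputs.append(0.0)         # stay silent: sparse activity
    return np.array(outputs)

# A slowly varying signal produces sparse, graded outputs.
t = np.linspace(0, 2 * np.pi, 50)
print(variable_spiking_neuron(0.5 * (1 + np.sin(t))))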
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently mitigates catastrophic forgetting in spiking neural networks, achieving nearly zero forgetting.
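A rough NumPy sketch of this idea follows, assuming Oja's subspace rule (a Hebbian term plus an anti-Hebbian decay) to extract the principal subspace of old-task activity, followed by an orthogonal projection of new-task gradients; the details are assumptions, not the paper's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Old-task activations whose principal subspace should be protected.
A = rng.normal(size=(500, 20))

# Oja's subspace rule: Hebbian outer product with anti-Hebbian decay.
k, lr = 3, 1e-3
W = 0.1 * rng.normal(size=(20, k))
for x in A:
    y = x @ W                          # neural response
    W += lr * np.outer(x - W @ y, y)   # Hebbian + anti-Hebbian update
W, _ = np.linalg.qr(W)                 # orthonormal basis of the subspace

# Continual-learning step: project a new-task gradient orthogonal to
# the protected subspace so old knowledge is not overwritten.
g = rng.normal(size=20)
g_orth = g - W @ (W.T @ g)
print(np.abs(W.T @ g_orth).max())      # ~0: no component left in the subspace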
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Fast gradient-free activation maximization for neurons in spiking neural networks [5.805438104063613]
We present a framework with an efficient design for such a stimulus-optimization loop.
We track changes in the optimal stimuli for artificial neurons during training.
This formation of refined optimal stimuli is associated with an increase in classification accuracy.
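As a toy illustration of such a gradient-free loop, the sketch below mutates a stimulus and keeps improvements; the stand-in neuron and search settings are assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in "neuron": a fixed random projection with a nonlinearity.
w = rng.normal(size=64)

def activation(stimulus):
    return np.tanh(stimulus @ w)

# Gradient-free search: perturb the best stimulus, keep improvements.
best = rng.normal(size=64)
best /= np.linalg.norm(best)
for _ in range(2000):
    cand = best + 0.1 * rng.normal(size=64)
    cand /= np.linalg.norm(cand)        # constrain the stimulus norm
    if activation(cand) > activation(best):
        best = cand

# The optimal unit-norm stimulus aligns with w, approaching tanh(||w||).
print(activation(best), np.tanh(np.linalg.norm(w)))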
arXiv Detail & Related papers (2023-12-28T18:30:13Z)
- Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition [8.998797644039064]
Spiking neural networks (SNNs) using leaky integrate-and-fire (LIF) neurons have been commonly used in automatic speech recognition (ASR) tasks.
Here we introduce four types of neuronal dynamics to post-process the sequential patterns generated from the spiking transformer.
We found that the DyTr-SNN handles non-toy automatic speech recognition tasks well, achieving a lower phoneme error rate, lower computational cost, and higher robustness.
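The four dynamics are not detailed in this summary, so as a purely assumed illustration, one simple post-processing dynamic is a leaky integrator over the per-step outputs:

import numpy as np

def leaky_readout(step_outputs, tau=0.8):
    # Toy neuronal dynamic: accumulate per-step outputs into
    # smoothed evidence for each phoneme class (illustrative only).
    state = np.zeros(step_outputs.shape[1])
    trace = []
    for step in step_outputs:           # iterate over time steps
        state = tau * state + (1 - tau) * step
        trace.append(state.copy())
    return np.array(trace)

# 20 time steps of binary "spike" activity over 5 phoneme classes.
rng = np.random.default_rng(2)
spikes = (rng.random((20, 5)) < 0.3).astype(float)
print(leaky_readout(spikes)[-1])        # smoothed class evidence at the end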
arXiv Detail & Related papers (2023-02-02T16:20:27Z)
- Neural network with optimal neuron activation functions based on additive Gaussian process regression [0.0]
More flexible neuron activation functions would allow using fewer neurons and layers and improve expressive power.
We show that additive Gaussian process regression (GPR) can be used to construct optimal neuron activation functions that are individual to each neuron.
An approach is also introduced that avoids non-linear fitting of neural network parameters.
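A minimal scikit-learn sketch of the idea, fitting a one-dimensional GPR that then serves as a neuron's activation function; the target shape and kernel below are assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sample points of a hypothetical "optimal" per-neuron response shape.
z = np.linspace(-3, 3, 15).reshape(-1, 1)      # pre-activation values
f = np.tanh(z).ravel() + 0.1 * z.ravel() ** 2  # assumed target shape

# The fitted GPR itself becomes the neuron's activation function.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(z, f)

def neuron_activation(pre_activation):
    return gpr.predict(np.asarray(pre_activation).reshape(-1, 1))

print(neuron_activation([-1.0, 0.0, 1.0]))     # smooth learned nonlinearity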
arXiv Detail & Related papers (2023-01-13T14:19:17Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
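One common way to obtain continuous outputs from spiking activity is a rate-coded linear readout; the sketch below assumes that scheme and is not necessarily the paper's decoder.

import numpy as np

def rate_readout(spike_counts, weights, bias, T):
    # Toy decoder: convert hidden spike counts over a window of T
    # steps into firing rates, then map them linearly to a
    # continuous regression output (assumed scheme).
    rates = spike_counts / T
    return rates @ weights + bias

rng = np.random.default_rng(3)
T = 100
counts = rng.integers(0, T, size=8)     # hidden-layer spike counts
w, b = rng.normal(size=8), 0.0
print(rate_readout(counts, w, b, T))    # a continuous-valued prediction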
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Energy-Efficient High-Accuracy Spiking Neural Network Inference Using Time-Domain Neurons [0.18352113484137625]
This paper presents a low-power, highly linear time-domain I&F neuron circuit.
The proposed neuron leads to a more than 4.3x lower error rate on MNIST inference.
The power consumed by the proposed neuron circuit is simulated to be 0.230 µW per neuron, which is orders of magnitude lower than that of existing voltage-domain neurons.
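A behavioral (not circuit-level) sketch of a time-domain I&F neuron follows: a constant input current charges the membrane linearly, so the time to first spike encodes the input. All values are illustrative, not the paper's circuit parameters.

def time_domain_if_neuron(input_current, threshold=1.0, dt=1e-3):
    v, t = 0.0, 0.0
    while v < threshold:
        v += input_current * dt     # linear charging in the time domain
        t += dt
    return t                        # spike time, inversely prop. to input

for i in (0.5, 1.0, 2.0):
    print(i, round(time_domain_if_neuron(i), 3))  # doubling the input halves the spike time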
arXiv Detail & Related papers (2022-02-04T08:24:03Z)
- Improving Spiking Neural Network Accuracy Using Time-based Neurons [0.24366811507669117]
Research on neuromorphic computing systems based on low-power spiking neural networks using analog neurons is in the spotlight.
As technology scales down, analog neurons are difficult to scale, and they suffer from reduced voltage headroom/dynamic range and circuit nonlinearities.
This paper first models the nonlinear behavior of existing current-mirror-based voltage-domain neurons designed in a 28 nm process and shows that SNN inference accuracy can be severely degraded by the neuron's nonlinearity.
We propose a novel neuron that processes incoming spikes in the time domain and greatly improves linearity, thereby improving inference accuracy compared to conventional voltage-domain neurons.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
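A minimal NumPy sketch of the principle for a toy feedback model z = tanh(Wz + x); the model is an assumption, but the backward step follows the implicit function theorem at the equilibrium rather than reversing the forward computation.

import numpy as np

rng = np.random.default_rng(4)
n = 5
W = 0.1 * rng.normal(size=(n, n))   # small weights -> contraction map
x = rng.normal(size=n)

# Forward: iterate the feedback network until it reaches equilibrium.
z = np.zeros(n)
for _ in range(100):
    z = np.tanh(W @ z + x)

# Backward: implicit differentiation at the fixed point.
# From z = tanh(W z + x):  dz/dx = (I - D W)^{-1} D,  D = diag(1 - z^2),
# so gradients never traverse the forward iterations.
D = np.diag(1 - z ** 2)
dz_dx = np.linalg.solve(np.eye(n) - D @ W, D)
print(dz_dx)                        # Jacobian of the equilibrium w.r.t. x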
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
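An illustrative sketch of ANV as a noise-injection layer on hidden activations; the noise scale and placement are assumptions, not the paper's exact formulation.

import numpy as np

def anv_layer(h, sigma=0.1, training=True, rng=None):
    # Perturb hidden activations with small Gaussian noise during
    # training, acting as an implicit regularizer (assumed form).
    if not training:
        return h                    # deterministic behaviour at test time
    rng = rng or np.random.default_rng()
    return h + sigma * rng.normal(size=h.shape)

h = np.ones(4)
print(anv_layer(h, rng=np.random.default_rng(5)))   # noisy during training
print(anv_layer(h, training=False))                 # unchanged at test time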
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
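To see how a single neuron with a non-monotonic activation can represent XOR, here is a toy sketch using a Gaussian bump as a stand-in for the ADA shape (not the authors' exact formula).

import numpy as np

def bump(z):
    # Non-monotonic stand-in activation (not the paper's ADA formula).
    return np.exp(-z ** 2)

# A single neuron with weights chosen so that z = x1 - x2.
w, b = np.array([1.0, -1.0]), 0.0
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([0, 1, 1, 0])

out = bump(X @ w + b)               # high when x1 == x2, low otherwise
pred = (out < 0.5).astype(int)      # thresholding recovers XOR exactly
print(pred, bool((pred == y_xor).all()))   # [0 1 1 0] True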
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.