Spiking neural network for nonlinear regression
- URL: http://arxiv.org/abs/2210.03515v1
- Date: Thu, 6 Oct 2022 13:04:45 GMT
- Title: Spiking neural network for nonlinear regression
- Authors: Alexander Henkes, Jason K. Eshraghian, Henning Wessels
- Abstract summary: Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks, also often referred to as the third generation of
neural networks, carry the potential for a massive reduction in memory and
energy consumption over traditional, second-generation neural networks.
Inspired by the undisputed efficiency of the human brain, they introduce
temporal and neuronal sparsity, which can be exploited by next-generation
neuromorphic hardware. To open the pathway toward engineering applications, we
introduce this exciting technology in the context of continuum mechanics.
However, the nature of spiking neural networks poses a challenge for regression
problems, which frequently arise in the modeling of engineering sciences. To
overcome this problem, a framework for regression using spiking neural networks
is proposed. In particular, a network topology for decoding binary spike trains
to real numbers is introduced, utilizing the membrane potential of spiking
neurons. As the aim of this contribution is a concise introduction to this new
methodology, several different spiking neural architectures, ranging from
simple spiking feed-forward to complex spiking long short-term memory neural
networks, are derived. Several numerical experiments directed towards
regression of linear and nonlinear, history-dependent material models are
carried out. A direct comparison with counterparts of traditional neural
networks shows that the proposed framework is much more efficient while
retaining precision and generalizability. All code has been made publicly
available in the interest of reproducibility and to promote continued
enhancement in this new domain.
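The decoding idea at the heart of the abstract, reading a real-valued regression output off the membrane potential of a non-spiking leaky integrator, can be sketched in a few lines. The following is an illustrative pure-Python toy under assumed constants (decay factor, threshold, soft reset); it mirrors the general idea, not the paper's actual implementation:

```python
def lif_layer(currents, beta=0.9, threshold=1.0, steps=50):
    """Leaky integrate-and-fire layer producing a binary spike train.

    currents: constant input current per neuron.
    Returns spikes as a list of per-step 0/1 vectors.
    """
    v = [0.0] * len(currents)
    spikes = []
    for _ in range(steps):
        fired = []
        for i, c in enumerate(currents):
            v[i] = beta * v[i] + c        # leaky integration
            if v[i] >= threshold:
                fired.append(1)
                v[i] -= threshold         # soft reset on spike
            else:
                fired.append(0)
        spikes.append(fired)
    return spikes

def membrane_readout(spikes, weights, beta=0.95):
    """Non-spiking leaky integrator: its final membrane potential is the
    real number decoded from the binary spike train."""
    v = 0.0
    for fired in spikes:
        drive = sum(w * s for w, s in zip(weights, fired))
        v = beta * v + drive              # integrate, never threshold
    return v
```

The key design point is that the readout neuron never fires: its continuous membrane potential carries the regression target, while all upstream communication remains binary and sparse.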
Related papers
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
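The autoregressive treatment of spin systems described above can be illustrated with a deliberately minimal sampler. The logistic conditional and the `coupling` parameter below are hypothetical simplifications for exposition, not the paper's message-passing architecture:

```python
import math
import random

def sample_ising_autoregressive(n, coupling=0.5, seed=0):
    """Sample n Ising spins s_i in {-1, +1} autoregressively: each
    conditional p(s_i = +1 | s_<i) depends on the previous spin via a
    logistic of `coupling`, loosely mimicking nearest-neighbour
    interactions."""
    rng = random.Random(seed)
    spins = []
    prev = 1
    for _ in range(n):
        p_up = 1.0 / (1.0 + math.exp(-2.0 * coupling * prev))
        s = 1 if rng.random() < p_up else -1
        spins.append(s)
        prev = s
    return spins
```

A real variational autoregressive network replaces the fixed logistic with a learned network over all earlier spins, which is what lets it capture the full interaction structure of the Hamiltonian.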
arXiv Detail & Related papers (2024-04-09T11:27:07Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently achieves continual learning in spiking neural networks with nearly zero forgetting.
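Extracting a principal subspace with Hebbian learning is classically illustrated by Oja's rule, which this textbook toy sketches; the learning rate, data distribution, and iteration count are arbitrary choices, and this is not the paper's lateral-connection method:

```python
import random

def oja_step(w, x, lr=0.01):
    """One Hebbian update (Oja's rule): w += lr * y * (x - y * w).
    Repeated over data, w converges to the first principal direction
    with unit norm."""
    y = sum(wi * xi for wi, xi in zip(w, x))          # neuron output
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [0.5, 0.5]
for _ in range(2000):
    # Toy data: variance along the first axis dominates.
    x = [random.gauss(0.0, 3.0), random.gauss(0.0, 0.3)]
    w = oja_step(w, x)

norm = sum(wi * wi for wi in w) ** 0.5
# After training, w aligns with the dominant axis (1, 0) up to sign.
```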
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
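Why firing-time codes can realize discontinuous maps is visible in a toy time-to-first-spike encoder: as the input current varies continuously, the first spike time jumps in integer steps. The constants below are illustrative assumptions, not the paper's construction:

```python
def first_spike_time(current, beta=0.9, threshold=1.0, steps=100):
    """Time-to-first-spike encoding: return the step at which a leaky
    membrane first crosses threshold, or None if it never fires.
    The output changes in discrete jumps as `current` varies smoothly,
    so the input-output map is inherently discontinuous."""
    v = 0.0
    for t in range(steps):
        v = beta * v + current
        if v >= threshold:
            return t
    return None
```

A sub-threshold current (steady-state membrane below threshold) never produces a spike at all, which is itself a discontinuity a ReLU network cannot express exactly.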
arXiv Detail & Related papers (2023-08-16T08:45:53Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Spike-based computation using classical recurrent neural networks [1.9171404264679484]
Spiking neural networks are artificial neural networks in which communication between neurons consists solely of events, also called spikes.
We modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based.
We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark.
arXiv Detail & Related papers (2023-06-06T12:19:12Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Training Deep Spiking Auto-encoders without Bursting or Dying Neurons through Regularization [9.34612743192798]
Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience.
We apply end-to-end learning with membrane potential-based backpropagation to a spiking convolutional auto-encoder.
We show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons.
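A hedged sketch of the kind of activity penalty such regularization could use follows; the rate bounds and quadratic form are assumptions for illustration, not the paper's loss:

```python
def activity_regularizer(spike_counts, steps, low=0.01, high=0.5, strength=1.0):
    """Penalize neurons that never fire ("dead") or fire nearly every
    step ("bursting") by pushing mean firing rates into [low, high].

    spike_counts: total spikes per neuron over `steps` simulation steps.
    """
    penalty = 0.0
    for count in spike_counts:
        rate = count / steps
        if rate < low:          # dead neuron: too silent
            penalty += (low - rate) ** 2
        elif rate > high:       # bursting neuron: fires too often
            penalty += (rate - high) ** 2
    return strength * penalty
```

Added to the reconstruction loss, a term like this keeps gradients flowing through neurons that would otherwise fall silent or saturate.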
arXiv Detail & Related papers (2021-09-22T21:27:40Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks inherit some advantages of "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
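The core idea, injecting small random variability into the network, can be sketched as weight perturbation before a forward pass; `sigma` and the Gaussian form are illustrative assumptions rather than the authors' exact scheme:

```python
import random

def neural_variability(weights, sigma=0.01):
    """Perturb each weight with small Gaussian noise, mimicking the
    trial-to-trial variability of biological neurons; such noise acts
    as an implicit regularizer against overfitting and memorization."""
    return [w + random.gauss(0.0, sigma) for w in weights]
```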
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.