Nonlinear computations in spiking neural networks through multiplicative
synapses
- URL: http://arxiv.org/abs/2009.03857v4
- Date: Mon, 22 Nov 2021 10:41:58 GMT
- Title: Nonlinear computations in spiking neural networks through multiplicative
synapses
- Authors: Michele Nardin, James W Phillips, William F Podlaski, Sander W Keemink
- Abstract summary: While nonlinear computations can be implemented successfully in spiking
neural networks, this requires supervised training, and the resulting connectivity can be hard to interpret.
We show how to directly derive the required connectivity for several nonlinear dynamical systems.
- Score: 3.1498833540989413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The brain efficiently performs nonlinear computations through its intricate
networks of spiking neurons, but how this is done remains elusive. While
nonlinear computations can be implemented successfully in spiking neural
networks, this requires supervised training and the resulting connectivity can
be hard to interpret. In contrast, the required connectivity for any
computation in the form of a linear dynamical system can be directly derived
and understood with the spike coding network (SCN) framework. These networks
also have biologically realistic activity patterns and are highly robust to
cell death. Here we extend the SCN framework to directly implement any
polynomial dynamical system, without the need for training. This results in
networks requiring a mix of synapse types (fast, slow, and multiplicative),
which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we
demonstrate how to directly derive the required connectivity for several
nonlinear dynamical systems. We also show how to implement higher-order
polynomials with coupled networks that use only pair-wise multiplicative
synapses, and provide expected numbers of connections for each synapse type.
Overall, our work demonstrates a novel method for implementing nonlinear
computations in spiking neural networks, while keeping the attractive features
of standard SCNs (robustness, realistic activity patterns, and interpretable
connectivity). Finally, we discuss the biological plausibility of our approach,
and how the high accuracy and robustness of the approach may be of interest for
neuromorphic computing.
Related papers
- Time-independent Spiking Neuron via Membrane Potential Estimation for Efficient Spiking Neural Networks [4.142699381024752]
The computational inefficiency of spiking neural networks (SNNs) is primarily due to the sequential updates of the membrane potential.
We propose Membrane Potential Estimation Parallel Spiking Neurons (MPE-PSN), a parallel computation method for spiking neurons.
Our approach exhibits promise for enhancing computational efficiency, particularly under conditions of elevated neuron density.
arXiv Detail & Related papers (2024-09-08T05:14:22Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Biologically-Plausible Topology Improved Spiking Actor Network for Efficient Deep Reinforcement Learning [15.143466733327566]
Recent advances in neuroscience have unveiled that the human brain achieves efficient reward-based learning.
The success of Deep Reinforcement Learning (DRL) is largely attributed to utilizing Artificial Neural Networks (ANNs) as function approximators.
We propose a novel alternative function approximator, the Biologically-Plausible Topology improved Spiking Actor Network (BPT-SAN).
arXiv Detail & Related papers (2024-03-29T13:25:19Z) - Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies for continuous control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - An exact mathematical description of computation with transient
spatiotemporal dynamics in a complex-valued neural network [33.7054351451505]
We study a complex-valued neural network (cv-NN) with linear time-delayed interactions.
The cv-NN displays sophisticated dynamics, including partially synchronized, adaptable "chimera" states.
We demonstrate that cv-NN computations are decodable by living biological neurons.
arXiv Detail & Related papers (2023-11-28T02:23:30Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays [3.5969667977870796]
We present MEMprop, which adopts gradient-based learning to train fully memristive spiking neural networks (MSNNs).
Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes.
We obtain highly competitive accuracy among previously reported lightweight, dense, fully memristive SNNs on several benchmarks.
arXiv Detail & Related papers (2022-06-26T23:13:34Z) - Optimal Approximation with Sparse Neural Networks and Applications [0.0]
We use deep sparsely connected neural networks to measure the complexity of a function class in $L^2(\mathbb{R}^d)$.
We also introduce a representation system, a countable collection of functions used to guide neural networks.
We then analyse the complexity of a class called $\beta$ cartoon-like functions using rate-distortion theory and the wedgelet construction.
arXiv Detail & Related papers (2021-08-14T05:14:13Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.