An error-propagation spiking neural network compatible with neuromorphic processors
- URL: http://arxiv.org/abs/2104.05241v1
- Date: Mon, 12 Apr 2021 07:21:08 GMT
- Title: An error-propagation spiking neural network compatible with neuromorphic processors
- Authors: Matteo Cartiglia, Germain Haessig, Giacomo Indiveri
- Abstract summary: We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
- Score: 2.432141667343098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks have shown great promise for the design of low-power
sensory-processing and edge-computing hardware platforms. However, implementing
on-chip learning algorithms on such architectures is still an open challenge,
especially for multi-layer networks that rely on the back-propagation
algorithm. In this paper, we present a spike-based learning method that
approximates back-propagation using local weight update mechanisms and which is
compatible with mixed-signal analog/digital neuromorphic circuits. We introduce
a network architecture that enables synaptic weight update mechanisms to
back-propagate error signals across layers and present a network that can be
trained to distinguish between two spike-based patterns that have identical
mean firing rates, but different spike-timings. This work represents a first
step towards the design of ultra-low power mixed-signal neuromorphic processing
systems with on-chip learning circuits that can be trained to recognize
different spatio-temporal patterns of spiking activity (e.g. produced by
event-based vision or auditory sensors).
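The error-propagation idea can be illustrated with a rate-based sketch: the output error is projected back through a fixed feedback pathway (in the spirit of feedback alignment), so each weight update depends only on locally available pre-synaptic activity and a post-synaptic error signal. This is only an illustration of the principle, not the authors' spike-based circuit rule; all sizes and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy input patterns and one-hot targets.
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
T = np.eye(2)

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_in, n_hid))    # input -> hidden
W2 = rng.normal(0, 0.5, (n_hid, n_out))   # hidden -> output
B  = rng.normal(0, 0.5, (n_out, n_hid))   # fixed feedback pathway for the error

def f(v):                                  # smooth stand-in for a
    return 1.0 / (1.0 + np.exp(-v))        # neuron's activation

for _ in range(500):
    h = f(X @ W1)                          # hidden rates
    y = f(h @ W2)                          # output rates
    e = T - y                              # output error signal
    # Each update uses only pre-synaptic activity and an error signal
    # that is explicitly propagated to the post-synaptic layer.
    W2 += 0.5 * h.T @ (e * y * (1 - y))
    W1 += 0.5 * X.T @ ((e @ B) * h * (1 - h))

print(f(f(X @ W1) @ W2).round(2))          # outputs should approach the targets
```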
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
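A rough sketch of the "computational graphs of parameters" idea: a hypothetical helper that turns an MLP's weight matrices into a node/edge list, with one node per neuron and one weight-labelled edge per connection (names and structure are illustrative, not the paper's implementation).

```python
import numpy as np

def mlp_to_graph(weights):
    """Represent an MLP as a graph: one node per neuron, one edge per
    weight, with the weight value as the edge feature. Illustrative only."""
    sizes = [weights[0].shape[0]] + [W.shape[1] for W in weights]
    nodes = [{"layer": l} for l, n in enumerate(sizes) for _ in range(n)]
    starts = np.cumsum([0] + sizes[:-1])
    edges = [(starts[l] + i, starts[l + 1] + j, W[i, j])
             for l, W in enumerate(weights)
             for i in range(W.shape[0]) for j in range(W.shape[1])]
    return nodes, edges

rng = np.random.default_rng(0)
nodes, edges = mlp_to_graph([rng.normal(size=(3, 4)), rng.normal(size=(4, 2))])
print(len(nodes), len(edges))  # 9 nodes, 20 edges
```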
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The system proposed supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
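For context only: one of the phenomena listed above, short-term plasticity, is commonly modelled with the Tsodyks-Markram equations. The software sketch below simulates that model under assumed parameters and says nothing about the chip's actual circuits.

```python
import numpy as np

def tsodyks_markram(spikes, dt=1e-3, U=0.2, tau_rec=0.5, tau_fac=0.05):
    """Short-term plasticity (Tsodyks-Markram): resources x deplete on each
    spike and recover with tau_rec; utilization u facilitates on each spike
    and decays back to U with tau_fac. Returns per-bin synaptic efficacy."""
    x, u, out = 1.0, U, []
    for s in spikes:
        x += dt * (1.0 - x) / tau_rec      # resources recover
        u += dt * (U - u) / tau_fac        # facilitation decays
        if s:
            out.append(u * x)              # efficacy of this spike
            x -= u * x                     # deplete resources
            u += U * (1.0 - u)             # facilitate
        else:
            out.append(0.0)
    return np.array(out)
```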
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum, which transforms time-varying analog signals into efficient spatiotemporal spike patterns.
The model provides a sparse and efficient coding scheme with precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
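The spiketrum model itself is not reproduced here; the sketch below only illustrates the general idea of spike coding with a precisely controllable rate, using a plain Poisson encoder with assumed parameters.

```python
import numpy as np

def encode_poisson(signal, max_rate=100.0, dt=1e-3, rng=None):
    """Encode an analog signal (values scaled to [0, 1]) as a Poisson
    spike train whose instantaneous rate tracks the signal amplitude."""
    rng = rng or np.random.default_rng()
    p = np.clip(signal, 0.0, 1.0) * max_rate * dt   # spike prob. per time bin
    return (rng.random(signal.shape) < p).astype(np.uint8)

t = np.linspace(0, 1, 1000)
spikes = encode_poisson(0.5 * (1 + np.sin(2 * np.pi * 3 * t)))
print(spikes.sum(), "spikes in 1 s")   # roughly max_rate / 2 on average
```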
arXiv Detail & Related papers (2023-09-11T13:06:19Z)
- Centered Self-Attention Layers [89.21791761168032]
The self-attention mechanism in transformers and the message-passing mechanism in graph neural networks are repeatedly applied.
We show that this repeated application inevitably leads to oversmoothing, i.e., to increasingly similar representations in the deeper layers.
We present a correction term to the aggregating operator of these mechanisms.
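A minimal sketch of one such centering correction, assuming it amounts to subtracting the mean value vector from the aggregation (the row-stochastic attention weights otherwise make every output a convex combination of the values, which drives oversmoothing); this is an illustration, not necessarily the paper's exact term.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv, centered=True):
    """Single-head self-attention with an optional centering correction:
    subtracting the mean of V is equivalent to replacing the attention
    matrix A by (A - ones/n), removing the common component that
    repeated aggregation would otherwise amplify."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = np.exp(Q @ K.T / np.sqrt(Q.shape[-1]))
    A /= A.sum(axis=-1, keepdims=True)          # rows sum to one
    out = A @ V
    if centered:
        out -= V.mean(axis=0, keepdims=True)    # centering term
    return out
```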
arXiv Detail & Related papers (2023-06-02T15:19:08Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
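LGM itself is not reproduced here. For context, the dynamics-unaware baseline such attacks are compared against is typically a one-step gradient-sign perturbation:

```python
import numpy as np

def fgsm_step(x, grad_loss_wrt_x, eps=0.03):
    """One-step gradient-sign attack (FGSM): perturb the input in the
    sign direction of the loss gradient. A dynamics-unaware baseline;
    it ignores how an adaptive network's inference path shifts under
    the perturbation, which is the problem LGM addresses."""
    return np.clip(x + eps * np.sign(grad_loss_wrt_x), 0.0, 1.0)
```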
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
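A minimal sketch of how spiking activity can support regression, assuming a leaky integrate-and-fire population whose empirical firing rates feed a linear readout (illustrative, not the paper's framework):

```python
import numpy as np

def lif_rates(x, W, thresh=1.0, tau=20.0, steps=500):
    """Drive LIF neurons with constant currents W @ x and return their
    empirical firing rates; a linear readout fitted on these rates
    (e.g. by least squares) then performs the regression."""
    I = W @ x
    v = np.zeros(W.shape[0])
    counts = np.zeros(W.shape[0])
    for _ in range(steps):
        v += (I - v) / tau          # leaky integration (dt = 1)
        fired = v >= thresh
        counts += fired
        v[fired] = 0.0              # reset after a spike
    return counts / steps
```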
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
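A sketch of the kind of input described here: b-bit quantized in-phase/quadrature samples of a noisy sum of sinusoids (all parameters are illustrative assumptions):

```python
import numpy as np

def quantized_iq(freqs, amps, n=64, bits=3, rng=None):
    """Generate n quantized I/Q samples of a sum of complex sinusoids
    plus noise; bits controls the quantizer resolution (e.g. 3 or 1)."""
    rng = rng or np.random.default_rng()
    t = np.arange(n)
    x = sum(a * np.exp(2j * np.pi * f * t) for f, a in zip(freqs, amps))
    x += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    levels = 2 ** bits
    q = lambda v: np.clip(np.round((v + 1) / 2 * (levels - 1)), 0, levels - 1)
    return q(x.real), q(x.imag)   # b-bit in-phase and quadrature channels

I, Q = quantized_iq(freqs=[0.1, 0.23], amps=[0.5, 0.4])
```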
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [2.6949002029513167]
Mixed-signal analog/digital electronic circuits can emulate spiking neurons and synapses with extremely high energy efficiency.
Device mismatch, however, is expressed as differences in effective parameters between identically-configured neurons and synapses.
We present a supervised learning approach that addresses this challenge by maximizing robustness to mismatch and other common sources of noise.
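One plausible reading of this approach, sketched below, is to sample multiplicative parameter noise on every training pass so the learned weights must perform well across a whole family of mismatched devices (a hypothetical stand-in for the paper's objective):

```python
import numpy as np

def mismatched_forward(x, W, mismatch_std=0.2, rng=None):
    """Evaluate a layer under simulated device mismatch: each call draws
    fresh multiplicative noise on the weights, so training against this
    forward pass rewards solutions that tolerate parameter variation."""
    rng = rng or np.random.default_rng()
    W_dev = W * (1.0 + rng.normal(0.0, mismatch_std, W.shape))
    return np.maximum(x @ W_dev, 0.0)   # ReLU stands in for a neuron model
```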
arXiv Detail & Related papers (2021-02-12T09:20:49Z)
- Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [0.0]
In this work, we introduce and test a novel implementation of recurrent neural networks into deep learning.
We provide an algorithm that implements backpropagation on an implicit implementation of recurrent networks.
A single-layer implicit recurrent network is able to solve the XOR problem, while a feed-forward network with monotonically increasing activation function fails at this task.
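The forward pass of such an implicit recurrent layer can be sketched as a fixed-point iteration; with suitably trained weights a single layer of this kind can realize XOR (training, e.g. via implicit differentiation, is omitted, and all names here are illustrative):

```python
import numpy as np

def implicit_layer(x, W_rec, W_in, b, iters=50):
    """Iterate h <- tanh(W_rec @ h + W_in @ x + b) to an (approximate)
    fixed point; the layer's output is the equilibrium state itself."""
    h = np.zeros(W_rec.shape[0])
    for _ in range(iters):
        h = np.tanh(W_rec @ h + W_in @ x + b)
    return h
```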
arXiv Detail & Related papers (2020-10-20T18:55:32Z)
- Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks [0.0]
We propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems.
The proposed learning rule supports multiple spikes fired by hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer.
We also explore several distinct spike-based encoding strategies in order to form compact representations of input data.
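A minimal sketch of first-to-spike decoding, assuming a leaky integrate-and-fire output layer: the predicted class is the output neuron that crosses threshold first (illustrative, not the paper's exact neuron model):

```python
import numpy as np

def first_to_spike_class(input_spikes, W, thresh=1.0, tau=20.0):
    """input_spikes: (T, n_in) binary array; W: (n_in, n_out) weights.
    Returns (predicted class, decision time step)."""
    v = np.zeros(W.shape[1])
    decay = np.exp(-1.0 / tau)
    for t, s in enumerate(input_spikes):
        v = v * decay + s @ W                     # leaky integration
        fired = np.flatnonzero(v >= thresh)
        if fired.size:                            # first neuron(s) past threshold
            return int(fired[np.argmax(v[fired])]), t
    return int(np.argmax(v)), len(input_spikes)   # fallback: no spike fired
```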
arXiv Detail & Related papers (2020-08-16T15:34:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.