Memory via Temporal Delays in weightless Spiking Neural Network
- URL: http://arxiv.org/abs/2202.07132v1
- Date: Tue, 15 Feb 2022 02:09:33 GMT
- Title: Memory via Temporal Delays in weightless Spiking Neural Network
- Authors: Hananel Hazan, Simon Caby, Christopher Earl, Hava Siegelmann, Michael Levin
- Abstract summary: We present a prototype for weightless spiking neural networks that can perform a simple classification task.
The memory in this network is stored in the timing between neurons, rather than the strength of the connection.
- Score: 0.08399688944263842
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A common view in the neuroscience community is that memory is encoded in the
connection strength between neurons. This perception led artificial neural
network models to focus on connection weights as the key variables to modulate
learning. In this paper, we present a prototype for weightless spiking neural
networks that can perform a simple classification task. The memory in this
network is stored in the timing between neurons, rather than the strength of
the connections, and is trained using a Hebbian Spike Timing Dependent
Plasticity (STDP) rule, which modulates the delays of the connections.
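To make the delay-coding idea concrete, below is a minimal, self-contained Python/NumPy sketch of a weightless readout neuron: every connection carries the same fixed strength, the only learned parameters are conduction delays, and an STDP-flavoured rule pulls each delayed arrival toward the postsynaptic spike time. This is an illustrative toy under stated assumptions, not the authors' implementation; all names, thresholds, and learning rates here are made up for the example.

```python
# Sketch (NOT the paper's code): memory stored in per-connection delays,
# adapted by a Hebbian/STDP-flavoured rule; connection strengths never change.
import numpy as np

rng = np.random.default_rng(0)
n_in = 4
delays = rng.uniform(1.0, 10.0, n_in)   # learnable conduction delays (ms); "weights" are all equal
lr = 0.3                                 # delay learning rate (assumed value)

def output_spike_time(spike_times, delays, k=3):
    """Weightless, non-leaky readout: the output fires once the k-th delayed input spike arrives."""
    arrivals = spike_times + delays
    return np.sort(arrivals)[k - 1]

def delay_stdp(spike_times, t_post, delays):
    """STDP-flavoured update on delays: each arrival is pulled toward the postsynaptic
    spike time, so a repeatedly presented pattern ends up arriving (nearly) coincidentally."""
    arrivals = spike_times + delays
    return np.clip(delays + lr * (t_post - arrivals), 0.1, 20.0)

pattern = np.array([0.0, 2.0, 4.0, 6.0])     # stored pattern: input spike times in ms
for _ in range(50):
    t_post = output_spike_time(pattern, delays)
    delays = delay_stdp(pattern, t_post, delays)

other = np.array([6.0, 4.0, 2.0, 0.0])       # same spikes, different timing
print("arrival spread, stored pattern:", round(float(np.ptp(pattern + delays)), 2))  # small: coincident
print("arrival spread, other pattern :", round(float(np.ptp(other + delays)), 2))    # large: not recognized
```

After a few dozen presentations the delays come to encode the relative timing of the stored pattern, so its spikes arrive at the output almost simultaneously, while a differently timed pattern arrives dispersed; the memory lives entirely in the delay vector rather than in any weights.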
Related papers
- Maelstrom Networks [19.33916380545711]
We offer an alternative paradigm that combines the strengths of recurrent networks with the pattern-matching capability of feed-forward neural networks.
This allows the network to leverage the strength of feed-forward training without unrolling the network.
It endows a neural network with a sequential memory that takes advantage of the inductive bias that data is organized causally in the temporal domain.
arXiv Detail & Related papers (2024-08-29T15:39:04Z)
- Spiking representation learning for associative memories [0.0]
We introduce a novel artificial spiking neural network (SNN) that performs unsupervised representation learning and associative memory operations.
The architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories.
arXiv Detail & Related papers (2024-06-05T08:30:11Z)
- Expanding memory in recurrent spiking networks [2.8237889121096034]
Recurrent spiking neural networks (RSNNs) are notoriously difficult to train because of the vanishing gradient problem, which is exacerbated by the binary nature of the spikes.
We present a novel spiking neural network that circumvents these limitations.
arXiv Detail & Related papers (2023-10-29T16:46:26Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Spike-based computation using classical recurrent neural networks [1.9171404264679484]
Spiking neural networks are artificial neural networks in which communication between neurons consists solely of events, also called spikes.
We modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based.
We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark.
arXiv Detail & Related papers (2023-06-06T12:19:12Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta-\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no further stimuli arrive.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, stored for a certain interval of time, and retrieved if the appropriate cue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Hierarchical Associative Memory [2.66512000865131]
Associative Memories or Modern Hopfield Networks have many appealing properties.
They can do pattern completion, store a large number of memories, and can be described using a recurrent neural network.
This paper tackles a gap and describes a fully recurrent model of associative memory with an arbitrarily large number of layers.
arXiv Detail & Related papers (2021-07-14T01:38:40Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.