Order from chaos: Interplay of development and learning in recurrent
networks of structured neurons
- URL: http://arxiv.org/abs/2402.16763v1
- Date: Mon, 26 Feb 2024 17:30:34 GMT
- Title: Order from chaos: Interplay of development and learning in recurrent
networks of structured neurons
- Authors: Laura Kriener, Kristin Völk, Ben von Hünerbein, Federico Benitez,
Walter Senn, Mihai A. Petrovici
- Abstract summary: We introduce a fully local, always-on plasticity rule to learn complex sequences in a recurrent network comprised of two populations.
Our model is resource-efficient, enabling the learning of complex sequences using only a small number of neurons.
We demonstrate these features in a mock-up of birdsong learning, in which our networks first learn a long, non-Markovian sequence that they can then reproduce robustly despite external disturbances.
- Score: 1.6880888629604525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Behavior can be described as a temporal sequence of actions driven by neural
activity. To learn complex sequential patterns in neural networks, memories of
past activities need to persist on significantly longer timescales than
relaxation times of single-neuron activity. While recurrent networks can
produce such long transients, training these networks in a biologically
plausible way is challenging. One approach has been reservoir computing, where
only weights from a recurrent network to a readout are learned. Other models
achieve learning of recurrent synaptic weights using propagated errors.
However, their biological plausibility typically suffers from issues with
locality, resource allocation or parameter scales and tuning. We suggest that
many of these issues can be alleviated by considering dendritic information
storage and computation. By applying a fully local, always-on plasticity rule
we are able to learn complex sequences in a recurrent network comprised of two
populations. Importantly, our model is resource-efficient, enabling the
learning of complex sequences using only a small number of neurons. We
demonstrate these features in a mock-up of birdsong learning, in which our
networks first learn a long, non-Markovian sequence that they can then
reproduce robustly despite external disturbances.
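The abstract contrasts its approach with reservoir computing, in which the recurrent weights stay fixed and only the linear readout is trained. As a minimal sketch of that baseline (all parameters here are illustrative, not taken from the paper), an echo-state reservoir with a ridge-regression readout can be set up as follows, using one-step-ahead prediction of a sine wave as a stand-in task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not from the paper).
N = 200          # reservoir size
T = 500          # training timesteps
leak = 0.3       # leak rate of the reservoir units
ridge = 1e-6     # readout regularization strength

# Fixed random recurrent weights, rescaled to spectral radius 0.9
# so that input-driven transients decay (the "echo state" property).
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

# Drive the reservoir with the input signal and record its states.
u = np.sin(np.linspace(0, 8 * np.pi, T + 1))
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Only the linear readout is trained (ridge regression); W never
# changes -- the defining property of reservoir computing.
target = u[1:T + 1]
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)

pred = states @ w_out
mse = np.mean((pred[100:] - target[100:]) ** 2)  # skip initial transient
```

Because learning is confined to a single linear solve, training is cheap, but the recurrent dynamics themselves are never shaped by the task, which is exactly the limitation the paper's fully local plasticity rule aims to overcome.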
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Artificial Neuronal Ensembles with Learned Context Dependent Gating [0.0]
We introduce Learned Context Dependent Gating (LXDG), a method to flexibly allocate and recall artificial neuronal ensembles.
Activities in the hidden layers of the network are modulated by gates, which are dynamically produced during training.
We demonstrate the ability of this method to alleviate catastrophic forgetting on continual learning benchmarks.
arXiv Detail & Related papers (2023-01-17T20:52:48Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate and Fire neurons, which is capable of training an SNN to learn complex spatiotemporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses which retains critical neural dynamics with reduced complexity.
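The Leaky Integrate and Fire (LIF) model mentioned above is the standard neuron model in such SNN work. A minimal single-neuron simulation (Euler integration, with illustrative constants that are not taken from this paper) looks like:

```python
# Illustrative constants for one leaky integrate-and-fire neuron.
TAU_M = 20e-3      # membrane time constant (s)
V_REST = -70e-3    # resting potential (V)
V_THRESH = -50e-3  # spike threshold (V)
V_RESET = -70e-3   # post-spike reset potential (V)
R_M = 10e6         # membrane resistance (ohm)
DT = 1e-4          # integration timestep (s)

def simulate_lif(i_input, steps):
    """Euler integration of tau_m * dV/dt = -(V - V_rest) + R_m * I.

    Returns the list of timesteps at which the neuron spiked.
    """
    v = V_REST
    spike_times = []
    for t in range(steps):
        # Membrane potential leaks toward V_REST and is driven by input.
        v += DT * (-(v - V_REST) + R_M * i_input) / TAU_M
        if v >= V_THRESH:
            spike_times.append(t)
            v = V_RESET  # hard reset after a spike
    return spike_times

# A constant 2.5 nA input pushes the steady-state potential above
# threshold, so the neuron fires regularly over a 0.5 s window.
spikes = simulate_lif(2.5e-9, 5000)
```

Training algorithms like the one in the cited paper work on top of this dynamics, adjusting synaptic weights so the resulting spike times match a target spatiotemporal pattern.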
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
- Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems [51.132938969015825]
We observe that recurrent networks have the uncanny ability to closely emulate the behavior of non-recurrent deep models.
We show that recurrent networks that are trained to solve simple mazes with few recurrent steps can indeed solve much more complex problems simply by performing additional recurrences during inference.
arXiv Detail & Related papers (2021-02-22T14:09:20Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- A Deep 2-Dimensional Dynamical Spiking Neuronal Network for Temporal Encoding trained with STDP [10.982390333064536]
We show that a large, deep layered SNN with dynamical, chaotic activity mimicking the mammalian cortex is capable of encoding information from temporal data.
We argue that the randomness inherent in the network weights allows the neurons to form groups that encode the input temporal data after self-organizing with STDP.
We analyze the network in terms of network entropy as a metric of information transfer.
arXiv Detail & Related papers (2020-09-01T17:12:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.