A bio-inspired bistable recurrent cell allows for long-lasting memory
- URL: http://arxiv.org/abs/2006.05252v1
- Date: Tue, 9 Jun 2020 13:36:31 GMT
- Title: A bio-inspired bistable recurrent cell allows for long-lasting memory
- Authors: Nicolas Vecoven and Damien Ernst and Guillaume Drion
- Abstract summary: We take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level.
This leads to the introduction of a new bistable biologically-inspired recurrent cell that is shown to strongly improve RNN performance on time-series.
- Score: 3.828689444527739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recurrent neural networks (RNNs) provide state-of-the-art performances in a
wide variety of tasks that require memory. These performances can often be
achieved thanks to gated recurrent cells such as gated recurrent units (GRU)
and long short-term memory (LSTM). Standard gated cells share a layer-wide internal
state to store information at the network level, and long-term memory is shaped
by network-wide recurrent connection weights. Biological neurons, on the other
hand, can hold information at the cellular level for an arbitrarily
long amount of time through a process called bistability. Through bistability,
cells can stabilize to different stable states depending on their own past
state and inputs, which permits the durable storing of past information in
neuron state. In this work, we take inspiration from biological neuron
bistability to embed RNNs with long-lasting memory at the cellular level. This
leads to the introduction of a new bistable biologically-inspired recurrent
cell that is shown to strongly improve RNN performance on time-series that
require very long memory, despite using only cellular connections (all
recurrent connections are from neurons to themselves, i.e. a neuron's state is
not influenced by the state of other neurons). Furthermore, equipping this cell
with recurrent neuromodulation makes it possible to link it to standard GRU cells,
taking a step towards the biological plausibility of GRU.
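The abstract describes the mechanism but not the update equations. Below is a minimal NumPy sketch of one way a gated cell with purely cellular (neuron-to-itself) recurrence and a bistability-controlling parameter could be written; the parameter names, initialization, and exact gating are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BistableRecurrentCell:
    """Gated recurrent cell whose only recurrent connections are from each
    neuron to itself (diagonal recurrence), so memory is held at the
    cellular level rather than through network-wide recurrent weights."""

    def __init__(self, input_size, hidden_size, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        s = 1.0 / np.sqrt(input_size)
        # Input-to-hidden weights (full matrices over the input only).
        self.U_a = rng.uniform(-s, s, (hidden_size, input_size))
        self.U_c = rng.uniform(-s, s, (hidden_size, input_size))
        self.U_h = rng.uniform(-s, s, (hidden_size, input_size))
        # Cellular recurrent weights: a single scalar per neuron.
        self.w_a = rng.uniform(-1.0, 1.0, hidden_size)
        self.w_c = rng.uniform(-1.0, 1.0, hidden_size)
        self.b_a = np.zeros(hidden_size)
        self.b_c = np.zeros(hidden_size)

    def step(self, x, h_prev):
        # a lies in (0, 2); components above 1 put the corresponding neuron
        # in a bistable regime where it can latch its own past state.
        a = 1.0 + np.tanh(self.U_a @ x + self.w_a * h_prev + self.b_a)
        # c in (0, 1) acts as a per-neuron update gate, GRU-style.
        c = sigmoid(self.U_c @ x + self.w_c * h_prev + self.b_c)
        # The candidate state sees the input and only the neuron's own
        # previous state (scaled by a), never the other neurons.
        h_cand = np.tanh(self.U_h @ x + a * h_prev)
        return c * h_prev + (1.0 - c) * h_cand

# Tiny usage example: run the cell over a random 20-step sequence.
cell = BistableRecurrentCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(0).normal(size=(20, 4)):
    h = cell.step(x, h)
print(h.shape)  # (8,)
```

In this sketch the parameter a ranges over (0, 2); whenever a component exceeds 1, the corresponding neuron can settle into one of two stable states, which is the kind of cellular, long-lasting memory the abstract refers to.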
Related papers
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired phenomenological model of a cortical neuron.
The ELM neuron can accurately match the input-output relationship of a detailed biophysical cortical neuron model with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Artificial Neuronal Ensembles with Learned Context Dependent Gating [0.0]
We introduce Learned Context Dependent Gating (LXDG), a method to flexibly allocate and recall 'artificial neuronal ensembles'.
Activities in the hidden layers of the network are modulated by gates, which are dynamically produced during training.
We demonstrate the ability of this method to alleviate catastrophic forgetting on continual learning benchmarks.
arXiv Detail & Related papers (2023-01-17T20:52:48Z)
- Mesoscopic modeling of hidden spiking neurons [3.6868085124383616]
We use coarse-graining and mean-field approximations to derive a bottom-up, neuronally-grounded latent variable model (neuLVM).
The neuLVM can be explicitly mapped to a recurrent, multi-population spiking neural network (SNN).
We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs.
arXiv Detail & Related papers (2022-05-26T17:04:39Z)
- Working Memory Connections for LSTM [51.742526187978726]
We show that Working Memory Connections consistently improve the performance of LSTMs on a variety of tasks.
Numerical results suggest that the cell state contains useful information that is worth including in the gate structure.
arXiv Detail & Related papers (2021-08-31T18:01:30Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no further stimuli arrive.
This kind of short-term memory can hold operative information for seconds and then completely forget it to avoid overlap with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Towards self-organized control: Using neural cellular automata to robustly control a cart-pole agent [62.997667081978825]
We use neural cellular automata to control a cart-pole agent.
We trained the model using deep Q-learning, where the states of the output cells were used as the Q-value estimates to be optimized.
arXiv Detail & Related papers (2021-06-29T10:49:42Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.