Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic
network
- URL: http://arxiv.org/abs/2108.13414v1
- Date: Tue, 31 Aug 2021 16:13:15 GMT
- Title: Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic
network
- Authors: Yuliya Tsybina, Innokentiy Kastalskiy, Mikhail Krivonosov, Alexey
Zaikin, Victor Kazantsev, Alexander Gorban and Susanna Gordleeva
- Abstract summary: We show how a piece of information can be maintained as a robust activity pattern for several seconds and then completely disappear if no further stimuli arrive.
This kind of short-term memory can retain operative information for seconds and then completely forget it, avoiding overlap with forthcoming patterns.
We show how arbitrary patterns can be loaded, stored for a certain interval of time, and retrieved when an appropriate cue pattern is applied to the input.
- Score: 52.77024349608834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling the neuronal processes underlying short-term working memory remains
the focus of many theoretical studies in neuroscience. Here we propose a
mathematical model of a spiking neural network (SNN) demonstrating how a piece
of information can be maintained as a robust activity pattern for several
seconds and then completely disappear if no further stimuli arrive. Such
short-term memory traces are preserved by the activation of astrocytes
accompanying the SNN. The astrocytes exhibit calcium transients on a time
scale of seconds. Through gliotransmitter release, these transients modulate
the efficiency of synaptic transmission and, hence, the firing rate of
neighboring neurons across diverse timescales. We show how such transients
continuously encode the frequencies of neuronal discharges and provide robust
short-term storage of analogous information. This kind of short-term memory
can retain operative information for seconds and then completely forget it,
avoiding overlap with forthcoming patterns. The SNN is interconnected with
the astrocytic layer by local intercellular diffusive connections. The
astrocytes are activated only when the neighboring neurons fire nearly
synchronously, e.g., when an information pattern is loaded. As an
illustration, we took greyscale photos of people's faces, with the grey level
of each pixel encoding the amplitude of the current applied to the
corresponding neuron. Astrocytic feedback modulates (facilitates) synaptic
transmission, thereby varying the frequency of neuronal firing. We show how
arbitrary patterns can be loaded, stored for a certain interval of time, and
retrieved when an appropriate cue pattern is applied to the input.
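The paper gives the full model equations; the fragment below is only a minimal sketch of the loop the abstract describes: leaky integrate-and-fire neurons whose synaptic transmission is transiently facilitated by a slow astrocytic calcium variable that rises only when neighboring neurons fire almost synchronously, with a greyscale image encoded as stimulation currents. Every parameter value and the specific coupling form (`astro_gain`, `sync_frac`, `encode_pattern`) are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative parameters; all values are assumptions, not taken from the paper.
N          = 100      # neurons in the spiking layer (one per image pixel)
dt         = 1e-3     # simulation step, s
tau_m      = 20e-3    # membrane time constant, s
v_thresh   = 1.0      # firing threshold (dimensionless units)
tau_ca     = 2.0      # astrocytic Ca2+ decay on the seconds scale
ca_drive   = 0.02     # Ca2+ increment per synchronous firing event
sync_frac  = 0.5      # fraction of neurons that must fire "quite synchronously"
astro_gain = 1.5      # facilitation of synaptic transmission by gliotransmitters

def encode_pattern(grey):
    """Map a greyscale image (values in [0, 1]) to stimulation currents,
    as in the face-photo illustration: brighter pixel -> stronger current."""
    return grey.flatten() * 2.0

def simulate(i_stim, t_total=5.0, t_load=0.5):
    steps = int(t_total / dt)
    v    = np.zeros(N)       # membrane potentials
    ca   = np.zeros(N)       # astrocytic Ca2+ in each local territory
    w0   = 0.1               # baseline synaptic weight
    rate = np.zeros(steps)   # population firing rate over time
    for t in range(steps):
        drive = i_stim if t * dt < t_load else 0.0   # stimulus only while loading
        spikes = v >= v_thresh
        v[spikes] = 0.0                              # reset fired neurons
        # Astrocytic feedback: the Ca2+ transient facilitates synaptic input.
        w = w0 * (1.0 + astro_gain * ca)
        recurrent = w * spikes.mean()
        v += dt / tau_m * (-v + drive + recurrent)   # leaky integrate-and-fire
        # Astrocytes activate only on near-synchronous local firing, then
        # decay over seconds, holding the short-term memory trace.
        if spikes.mean() > sync_frac:
            ca += ca_drive * spikes
        ca -= dt / tau_ca * ca
        rate[t] = spikes.mean() / dt
    return rate, ca

rng = np.random.default_rng(0)
pattern = rng.random((10, 10))   # stand-in for a greyscale face photo
rate, ca = simulate(encode_pattern(pattern))
```

After the stimulus is removed at `t_load`, it is the seconds-scale decay of `ca` that keeps synapses facilitated for a while before the trace fades, mirroring the load-store-forget behaviour described above.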
Related papers
- Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell that may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF (a minimal sketch of this kind of two-compartment neuron appears after this list).
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
The ELM neuron can accurately match a cortical neuron's input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles [1.2617078020344619]
We show that self-supervised training of a deep neural network can be used to compensate for inter-session variability.
A sequential autoencoding model can maintain state-of-the-art behaviour decoding performance for completely unseen recording sessions several days into the future.
arXiv Detail & Related papers (2022-05-19T20:00:33Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate-and-Fire neurons, which is capable of training an SNN to learn complex spatio-temporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses which retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
- A bio-inspired bistable recurrent cell allows for long-lasting memory [3.828689444527739]
We take inspiration from biological neuron bistability to embed RNNs with long-lasting memory at the cellular level.
This leads to the introduction of a new bistable, biologically inspired recurrent cell that is shown to strongly improve RNN performance on time-series tasks.
arXiv Detail & Related papers (2020-06-09T13:36:31Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
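For the LSTM-LIF entry above, here is a minimal sketch of the general two-compartment idea (our assumption of the mechanism, not the published LSTM-LIF equations): a slow dendritic compartment integrates input over long horizons and feeds a fast somatic compartment that emits spikes; only the fast compartment is reset on a spike, so the slow state persists. The time constants and input values are illustrative assumptions.

```python
# Assumed time constants: a slow "memory" compartment and a fast somatic one.
dt, tau_slow, tau_fast, v_th = 1e-3, 200e-3, 10e-3, 1.0

def two_compartment_step(v_slow, v_fast, i_in):
    """One Euler step of a sketched two-compartment LIF neuron."""
    v_slow += dt / tau_slow * (-v_slow + i_in)    # slow compartment: long memory
    v_fast += dt / tau_fast * (-v_fast + v_slow)  # fast compartment: spiking readout
    spike = v_fast >= v_th
    if spike:
        v_fast = 0.0  # reset only the somatic compartment; the slow state
                      # persists, giving the long short-term memory effect
    return v_slow, v_fast, spike

# Example: drive the neuron briefly, then watch the slow state persist.
v_s, v_f = 0.0, 0.0
for step in range(3000):
    i = 2.0 if step < 500 else 0.0
    v_s, v_f, spike = two_compartment_step(v_s, v_f, i)
```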
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.