Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis
- URL: http://arxiv.org/abs/2104.10851v1
- Date: Thu, 22 Apr 2021 04:01:32 GMT
- Title: Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis
- Authors: Alexander Hadjiivanov
- Abstract summary: This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
- Score: 91.3755431537592
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most classical (non-spiking) neural network models disregard internal neuron
dynamics and treat neurons as simple input integrators. However, biological
neurons have an internal state governed by complex dynamics that plays a
crucial role in learning, adaptation and the overall network activity and
behaviour. This paper presents the Membrane Potential and Activation Threshold
Homeostasis (MPATH) neuron model, which combines several biologically inspired
mechanisms to efficiently simulate internal neuron dynamics with a single
parameter analogous to the membrane time constant in biological neurons. The
model allows neurons to maintain a form of dynamic equilibrium by automatically
regulating their activity when presented with fluctuating input. One
consequence of the MPATH model is that it imbues neurons with a sense of time
without recurrent connections, paving the way for modelling processes that
depend on temporal aspects of neuron activity. Experiments demonstrate the
model's ability to adapt to and continually learn from its input.
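The two mechanisms in the model's name can be illustrated with a toy neuron: a membrane potential that leaks toward rest with a single time constant, and an activation threshold that rises with activity and relaxes when quiet. The sketch below is a minimal illustration under assumed parameter values (`tau`, `adapt_up`, `relax_down` are hypothetical), not the exact MPATH formulation from the paper.

```python
import math

class HomeostaticNeuron:
    """Toy leaky neuron with a homeostatic activation threshold.

    Illustrative only -- not the MPATH model itself. `tau` plays the
    role of the single parameter analogous to the membrane time
    constant; `adapt_up` and `relax_down` are assumed values.
    """

    def __init__(self, tau=20.0, threshold=1.0, adapt_up=0.05, relax_down=0.005):
        self.tau = tau                # membrane time constant
        self.v = 0.0                  # membrane potential
        self.threshold = threshold    # current activation threshold
        self.adapt_up = adapt_up      # threshold increase after activation
        self.relax_down = relax_down  # slow threshold decay when quiet

    def step(self, x, dt=1.0):
        """Integrate one input sample; return True if the neuron activates."""
        # Leaky integration: the potential decays toward rest while being
        # driven by the input, giving the neuron an implicit sense of time.
        self.v = self.v * math.exp(-dt / self.tau) + x
        if self.v >= self.threshold:
            # Activation: reset the potential and raise the threshold,
            # making the neuron harder to activate again (homeostasis).
            self.v = 0.0
            self.threshold += self.adapt_up
            return True
        # When quiet, the threshold slowly relaxes back down, so the
        # neuron settles into a dynamic equilibrium under steady input.
        self.threshold = max(0.1, self.threshold - self.relax_down)
        return False

neuron = HomeostaticNeuron()
# Activation count per 100-step window declines under constant drive
# as the threshold adapts upward toward its equilibrium.
rates = [sum(neuron.step(0.5) for _ in range(100)) for _ in range(3)]
```

Driving the neuron with a constant input shows the homeostatic effect the abstract describes: activity is high at first, then self-regulates as the threshold tracks recent firing.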
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Self-Evolutionary Reservoir Computer Based on Kuramoto Model [1.7072337666116733]
As a biologically inspired neural network, reservoir computing (RC) has unique advantages in processing information.
We propose a structural autonomous development reservoir computing model (sad-RC), whose structure can adapt to the specific problem at hand without any human expert knowledge.
arXiv Detail & Related papers (2023-01-25T15:53:39Z) - STNDT: Modeling Neural Population Activity with a Spatiotemporal
Transformer [19.329190789275565]
We introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons.
We show that our model achieves state-of-the-art performance on ensemble level in estimating neural activities across four neural datasets.
arXiv Detail & Related papers (2022-06-09T18:54:23Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2021-10-15T17:55:04Z)
- Modeling the Nervous System as An Open Quantum System [4.590533239391236]
We propose a neural network model of a multi-neuron interacting system that simulates how neurons interact with each other.
We physically model the neuronal cell surroundings, including the dendrites, the axons and the synapses.
We find that this model can generate random neuron-neuron interactions and is well suited to physically describing the process of information transmission in the nervous system.
arXiv Detail & Related papers (2021-03-18T10:17:09Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.