Plasticity Neural Network Based on Astrocytic Influence at Critical
Periods, Synaptic Competition and Compensation by Current and Mnemonic Brain
Plasticity and Synapse Formation
- URL: http://arxiv.org/abs/2203.11740v1
- Date: Sat, 19 Mar 2022 14:38:54 GMT
- Title: Plasticity Neural Network Based on Astrocytic Influence at Critical
Periods, Synaptic Competition and Compensation by Current and Mnemonic Brain
Plasticity and Synapse Formation
- Authors: Jun-Bo Tao, Bai-Qing Sun, Wei-Dong Zhu, Shi-You Qu, Ling-Kun Chen,
Jia-Qiang Li, Chong Wu, Yu Xiong, Jiaxuan Zhou
- Abstract summary: Based on the RNN framework, we accomplished the model construction, formula derivation and algorithm testing for PNN.
The question we propose is whether the advancement of neuroscience and brain cognition is achieved by model construction, formula derivation or algorithm testing.
- Score: 7.8787868286474
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Based on the RNN framework, we accomplished the model construction,
formula derivation and algorithm testing for PNN. We elucidated the mechanism of
PNN based on the latest MIT research on synaptic compensation, and also grounded
our study on the findings of the Stanford research, which suggested that synapse
formation is important for competition in dendrite morphogenesis. The influence
of astrocytes on brain plasticity and synapse formation is an important
mechanism of our Neural Network at critical periods or at the end of critical
periods. In the model for critical periods, the hypothesis is that the best
brain plasticity so far affects current brain plasticity, and the best synapse
formation so far affects current synapse formation. Furthermore, PNN takes into
account the mnemonic gradient of informational synapse formation, and the
brain-plasticity and synapse-formation change frame of the NN is a new method of
Deep Learning. The question we propose is whether the advancement of
neuroscience and brain cognition is achieved by model construction, formula
derivation or algorithm testing. We resorted to the Artificial Neural Network
(ANN), evolutionary computation and other numerical methods for hypotheses,
possible explanations and rules, rather than relying only on biological tests
involving cutting-edge imaging and genetic tools; this approach also avoids the
ethical concerns of animal testing.
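The "best so far affects current" hypothesis for critical periods can be sketched as a gated recurrent weight update, in which the strongest plasticity observed so far modulates the current weight change. This is a hypothetical illustration under assumed definitions, not the paper's actual PNN equations: the plasticity score, the Hebbian-style candidate update, and the gating rule below are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins (assumptions, not the paper's formulation):
#   "synapse formation"  -> the weight matrix W of a tiny recurrent cell
#   "brain plasticity"   -> the magnitude of each candidate weight change
n = 8                              # hidden units
W = rng.normal(0, 0.1, (n, n))     # recurrent weights ("synapses")
best_plasticity = 0.0              # best plasticity score observed so far

def plasticity_score(delta_w):
    """A stand-in plasticity measure: magnitude of the weight change."""
    return float(np.linalg.norm(delta_w))

for step in range(100):
    x = rng.normal(0, 1, n)
    h = np.tanh(W @ x)                   # recurrent activation
    delta_w = 0.01 * np.outer(h, x)      # Hebbian-style candidate update

    score = plasticity_score(delta_w)
    best_plasticity = max(best_plasticity, score)

    # "Best so far affects current": scale the current update by how it
    # compares to the best plasticity observed so far (assumed gating rule).
    gate = score / (best_plasticity + 1e-12)
    W += gate * delta_w
```

Under this assumed gating, updates weaker than the historical best are attenuated, so the running maximum acts as a critical-period-like reference against which current plasticity is judged.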
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule [0.0]
We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain.
We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
arXiv Detail & Related papers (2024-05-23T03:28:52Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity in neural networks [0.0]
The brain is a nonlinear and highly recurrent neural network (RNN).
The proposed framework involves a new NN-based actor-critic method which is used to simulate the error feedback loop systems.
arXiv Detail & Related papers (2023-03-10T13:36:31Z)
- Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks [4.736525128377909]
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware.
We propose a robust and effective learning mechanism by modeling the associative plasticity between synapses.
Our approaches achieve superior performance on static and state-of-the-art neuromorphic datasets.
arXiv Detail & Related papers (2022-07-24T06:12:23Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- SpikePropamine: Differentiable Plasticity in Spiking Neural Networks [0.0]
We introduce a framework for learning the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in Spiking Neural Networks (SNNs).
We show that SNNs augmented with differentiable plasticity are sufficient for solving a set of challenging temporal learning tasks.
These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task.
arXiv Detail & Related papers (2021-06-04T19:29:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.