Learning by Active Forgetting for Neural Networks
- URL: http://arxiv.org/abs/2111.10831v1
- Date: Sun, 21 Nov 2021 14:55:03 GMT
- Title: Learning by Active Forgetting for Neural Networks
- Authors: Jian Peng, Xian Sun, Min Deng, Chao Tao, Bo Tang, Wenbo Li, Guohua Wu,
Qing Zhu, Yu Liu, Tao Lin, Haifeng Li
- Abstract summary: Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system.
Modern machine learning systems have sought to endow machines with lifelong learning capability through better remembering.
This paper presents a learning model with an active forgetting mechanism for artificial neural networks.
- Score: 36.47528616276579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Remembering and forgetting mechanisms are two sides of the same coin in a
human learning-memory system. Inspired by human brain memory mechanisms, modern
machine learning systems have sought to endow machines with lifelong learning
capability through better remembering, while treating forgetting as an
antagonist to overcome. Nevertheless, this idea may capture only half the
picture. Recently, a growing number of researchers have argued that a brain is
born to forget, i.e., that forgetting is a natural and active process that
yields abstract, rich, and flexible representations. This paper presents a
learning model with an active forgetting mechanism for artificial neural
networks. The active forgetting mechanism (AFM) is introduced to a neural
network via a "plug-and-play" forgetting layer (P&PF), consisting of groups of
inhibitory neurons with an Internal Regulation Strategy (IRS) that adjusts
their own extinction rate via lateral inhibition, and an External Regulation
Strategy (ERS) that adjusts the extinction rate of excitatory neurons via
inhibition. Experimental studies have shown that the P&PF offers surprising
benefits: self-adaptive structure, strong generalization, long-term learning
and memory, and robustness to data and parameter perturbation. This work sheds
light on the importance of forgetting in the learning process and offers new
perspectives to understand the underlying mechanisms of neural networks.
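To make the mechanism concrete, here is a minimal PyTorch sketch of what such a forgetting layer could look like. It is not the authors' implementation: the ForgettingLayer name, the softmax form of lateral inhibition standing in for the IRS, and the sigmoid gate standing in for the ERS are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ForgettingLayer(nn.Module):
    """A 'plug-and-play' forgetting gate inserted after an excitatory layer (a sketch)."""

    def __init__(self, num_features: int):
        super().__init__()
        # One learnable inhibitory neuron per excitatory channel.
        self.inhibitory = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # IRS (assumed form): lateral inhibition within the inhibitory group,
        # modeled as softmax competition that normalizes group activity.
        activity = torch.softmax(self.inhibitory, dim=0) * self.inhibitory.numel()
        # ERS (assumed form): inhibitory activity gates excitatory channels;
        # a gate near 0 extinguishes (forgets) a channel, near 1 keeps it.
        gate = torch.sigmoid(1.0 - activity)
        return x * gate

# Usage: drop the layer between existing layers of a plain classifier;
# the gates are then learned end-to-end with the task.
net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    ForgettingLayer(256),
    nn.Linear(256, 10),
)
out = net(torch.randn(8, 784))
```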
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic mechanisms.
These memristors operate in a non-filamentary, low-conductance regime, which enables stable and energy-efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
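The entry above describes Hebbian and anti-Hebbian learning on lateral connections extracting the principal subspace of neural activity. Below is a minimal NumPy sketch of that general idea using Oja's subspace rule, which is an assumption here; the paper's actual orthogonal-projection method is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lr = 20, 3, 0.01                    # input dim, subspace dim, step size
W = rng.normal(scale=0.1, size=(k, d))    # feedforward (Hebbian) weights

# Inputs whose variance is concentrated in a random 3-dimensional subspace.
basis, _ = np.linalg.qr(rng.normal(size=(d, k)))
X = rng.normal(size=(5000, k)) @ basis.T + 0.05 * rng.normal(size=(5000, d))

for x in X:
    y = W @ x
    # Oja's subspace rule: the Hebbian term y x^T minus a decorrelating term
    # (y y^T) W that plays the role of anti-Hebbian lateral inhibition.
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W now approximately span the principal subspace of the data, so
# W @ basis is close to an orthogonal 3x3 map and this prints roughly I.
print(np.round(W @ basis @ (W @ basis).T, 2))
```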
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes [0.7255608805275865]
Brain-state-specific neural mechanisms play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence.
This work aims to provide a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning.
arXiv Detail & Related papers (2023-11-10T14:16:46Z)
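The two-compartment entry above lends itself to a small illustration. The sketch below is an assumption-laden reading of the idea, not the paper's model: a basal compartment integrates incoming evidence, an apical compartment carries context, and a brain-state flag selects among amplification, isolation, and drive regimes. All constants and functional forms are invented for illustration.

```python
import numpy as np

def step(v, basal, apical, regime, dt=1.0, tau=20.0, v_th=1.0):
    """One Euler step of the somatic potential; returns (new_v, spiked)."""
    if regime == "amplification":   # wake-like: apical context boosts basal drive
        drive = basal * (1.0 + np.tanh(max(apical, 0.0)))
    elif regime == "isolation":     # apical input is gated off entirely
        drive = basal
    else:                           # "drive": apical input alone can fire the cell
        drive = basal + apical
    v = v + dt / tau * (-v + drive)
    return (0.0, True) if v >= v_th else (v, False)

# Usage: the same inputs fire the cell or not depending on the brain state.
v = 0.0
for t in range(200):
    v, spiked = step(v, basal=0.8, apical=0.8, regime="amplification")
```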
- A Study of Biologically Plausible Neural Network: The Role and Interactions of Brain-Inspired Mechanisms in Continual Learning [13.041607703862724]
Humans excel at continually acquiring, consolidating, and retaining information from an ever-changing environment, whereas artificial neural networks (ANNs) exhibit catastrophic forgetting.
We consider a biologically plausible framework that comprises separate populations of exclusively excitatory and inhibitory neurons, adhering to Dale's principle.
We then conduct a comprehensive study on the role and interactions of different mechanisms inspired by the brain, including sparse non-overlapping representations, Hebbian learning, synaptic consolidation, and replay of past activations that accompanied the learning event.
arXiv Detail & Related papers (2023-04-13T16:34:12Z)
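The study above builds on networks that respect Dale's principle. As a rough sketch of what that constraint can look like in code, here is a hypothetical DaleLinear layer in PyTorch in which each input unit has a fixed excitatory or inhibitory sign and only weight magnitudes are learned; the 80/20 split and the ReLU parameterization are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DaleLinear(nn.Module):
    """Linear layer whose input units have fixed excitatory/inhibitory signs."""

    def __init__(self, n_in: int, n_out: int, frac_excitatory: float = 0.8):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        n_exc = int(frac_excitatory * n_in)
        # +1 for excitatory input units, -1 for inhibitory ones (fixed).
        sign = torch.cat([torch.ones(n_exc), -torch.ones(n_in - n_exc)])
        self.register_buffer("sign", sign)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Magnitudes are learned; Dale's principle fixes each column's sign.
        w = F.relu(self.raw) * self.sign
        return x @ w.t()

layer = DaleLinear(100, 50)
y = layer(torch.randn(8, 100))
```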
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Control of synaptic plasticity via the fusion of reinforcement learning and unsupervised learning in neural networks [0.0]
In cognitive neuroscience, it is widely accepted that synaptic plasticity plays an essential role in our amazing learning capability.
With this inspiration, a new learning rule is proposed via the fusion of reinforcement learning and unsupervised learning.
In the proposed computational model, nonlinear optimal control theory is used to model error feedback loop systems.
arXiv Detail & Related papers (2023-03-26T12:18:03Z)
- Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity [9.453554184019108]
Hebbian plasticity is believed to play a pivotal role in biological memory.
We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity.
We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities.
arXiv Detail & Related papers (2022-05-23T12:48:37Z)
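The Hebbian-plasticity entry above rests on the classic principle that coincident pre- and postsynaptic activity strengthens a synapse, which by itself suffices to store and recall associations. Below is a minimal NumPy sketch of that principle, not the paper's architecture; the pattern sizes, rates, and threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 50, 20
w = np.zeros((n_post, n_pre))
eta = 0.1

# Storage: clamp the postsynaptic pattern while the presynaptic pattern is
# active; the Hebbian rule (dw ~ post * pre) imprints the association.
pre_pat = (rng.random(n_pre) < 0.2).astype(float)
post_pat = (rng.random(n_post) < 0.2).astype(float)
for _ in range(20):
    w += eta * np.outer(post_pat, pre_pat)

# Recall: a noisy version of the presynaptic pattern retrieves the stored
# postsynaptic pattern through the potentiated synapses.
noisy = np.clip(pre_pat + (rng.random(n_pre) < 0.05), 0.0, 1.0)
recalled = (w @ noisy > 0.5 * (w @ pre_pat).max()).astype(float)
print(np.array_equal(recalled, post_pat))  # True: the association is recovered
```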
- Cortico-cerebellar networks as decoupling neural interfaces [1.1879716317856945]
The brain solves the credit assignment problem remarkably well.
For credit to be assigned across neural networks, they must, in principle, wait for specific neural computations to finish.
Deep learning methods suffer from similar locking constraints in both the forward and feedback phases.
Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve these locking problems by acting akin to decoupled neural interfaces (DNIs).
arXiv Detail & Related papers (2021-10-21T22:02:38Z)
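The cortico-cerebellar entry above compares the cerebellum to decoupled neural interfaces (DNIs), where a small side network predicts a layer's gradient so the layer can update without waiting for the true backward pass. Here is a minimal PyTorch sketch of that synthetic-gradient idea; the shapes, the linear predictor, and the single-step loop are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
layer = nn.Linear(32, 64)   # "cortical" layer to be unlocked
head = nn.Linear(64, 10)    # downstream network producing the true loss
synth = nn.Linear(64, 64)   # predicts dL/dh, playing the cerebellum's role
params = list(layer.parameters()) + list(head.parameters()) + list(synth.parameters())
opt = torch.optim.SGD(params, lr=0.01)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
opt.zero_grad()

h = torch.relu(layer(x))
g_hat = synth(h.detach())         # synthetic gradient, available immediately
h.backward(g_hat.detach())        # the layer updates without waiting ("unlocked")

h2 = h.detach().requires_grad_(True)
loss = F.cross_entropy(head(h2), y)
loss.backward()                   # the true gradient arrives later
F.mse_loss(g_hat, h2.grad).backward()  # train the predictor toward the truth
opt.step()
```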
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.