A Neural Network Model of Continual Learning with Cognitive Control
- URL: http://arxiv.org/abs/2202.04773v1
- Date: Wed, 9 Feb 2022 23:53:05 GMT
- Authors: Jacob Russin, Maryam Zolfaghar, Seongmin A. Park, Erie Boorman,
Randall C. O'Reilly
- Abstract summary: We show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked.
We further show an advantage of blocking over interleaving when there is a bias for active maintenance in the control signal.
Our work highlights the potential of cognitive control to aid continual learning in neural networks, and offers an explanation for the advantage of blocking that has been observed in humans.
- Score: 1.8051006704301769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks suffer from catastrophic forgetting in continual
learning settings: when trials are blocked, new learning can overwrite the learning
from previous blocks. Humans learn effectively in these settings, in some cases
even showing an advantage of blocking, suggesting the brain contains mechanisms
to overcome this problem. Here, we build on previous work and show that neural
networks equipped with a mechanism for cognitive control do not exhibit
catastrophic forgetting when trials are blocked. We further show an advantage
of blocking over interleaving when there is a bias for active maintenance in
the control signal, implying a tradeoff between maintenance and the strength of
control. Analyses of map-like representations learned by the networks provided
additional insights into these mechanisms. Our work highlights the potential of
cognitive control to aid continual learning in neural networks, and offers an
explanation for the advantage of blocking that has been observed in humans.
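As a concrete, purely illustrative sketch of the setup the abstract describes, the snippet below contrasts blocked and interleaved trial orderings and applies a multiplicative task-context gate to a hidden layer. The names `make_trials` and `gated_hidden` are hypothetical, and the gate is a simple stand-in for a cognitive-control signal, not the authors' actual model.

```python
import numpy as np

def make_trials(task_a, task_b, blocked):
    """Order trials from two tasks either blocked (AAABBB) or interleaved (ABABAB)."""
    if blocked:
        return task_a + task_b
    # interleave pairwise: one trial from each task in alternation
    return [t for pair in zip(task_a, task_b) for t in pair]

def gated_hidden(x, W, context_gate):
    """Apply a multiplicative context gate to hidden activity: a toy stand-in
    for a control signal that routes each task through its own units."""
    h = np.maximum(0.0, W @ x)   # ReLU hidden layer
    return h * context_gate      # control signal silences off-task units
```

Because the gate silences off-task units, the two tasks write into largely disjoint parts of the hidden layer, which is one intuition for why control could keep blocked learning from overwriting earlier blocks.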
Related papers
- Avoiding Catastrophic Forgetting in Visual Classification Using Human
Concept Formation [0.8159711103888622]
We propose Cobweb4V, a novel visual classification approach that builds on Cobweb, a human-like learning system.
In this research, we conduct a comprehensive evaluation, showcasing the proficiency of Cobweb4V in learning visual concepts.
These characteristics align with learning strategies in human cognition, positioning Cobweb4V as a promising alternative to neural network approaches.
arXiv Detail & Related papers (2024-02-26T17:20:16Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
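The orthogonal-projection idea in this summary can be sketched in a few lines: keep a basis for the subspace spanned by past inputs and project each candidate weight update onto its orthogonal complement, so updates cannot change the network's responses to old inputs. This is a generic gradient-projection sketch under assumed shapes, not the paper's Hebbian lateral-connection circuit; `project_update` is a hypothetical helper.

```python
import numpy as np

def project_update(delta_W, past_activity):
    """Project a weight update onto the orthogonal complement of the subspace
    spanned by past input activity (columns of `past_activity`), so new
    learning cannot disturb old input-output mappings."""
    # Orthonormal basis of the past-activity subspace via QR decomposition.
    Q, _ = np.linalg.qr(past_activity)             # columns span past inputs
    P = np.eye(past_activity.shape[0]) - Q @ Q.T   # projector onto the complement
    return delta_W @ P                             # update acts only on new directions
```

Any input lying in the span of the stored activity is annihilated by the projected update, which is the sense in which forgetting is driven toward zero.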
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Learning by Active Forgetting for Neural Networks [36.47528616276579]
Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system.
Modern machine learning systems have been working to endow machines with lifelong learning capabilities through better remembering.
This paper presents a learning model by active forgetting mechanism with artificial neural networks.
arXiv Detail & Related papers (2021-11-21T14:55:03Z)
- Wide Neural Networks Forget Less Catastrophically [39.907197907411266]
We study the impact of "width" of the neural network architecture on catastrophic forgetting.
We study the learning dynamics of the network from various perspectives.
arXiv Detail & Related papers (2021-10-21T23:49:23Z)
- Cortico-cerebellar networks as decoupling neural interfaces [1.1879716317856945]
The brain solves the credit assignment problem remarkably well.
For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish.
Deep learning methods suffer from similar locking constraints both on the forward and feedback phase.
Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve such locking problems in a manner akin to decoupled neural interfaces (DNIs).
arXiv Detail & Related papers (2021-10-21T22:02:38Z)
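The locking problem in the cortico-cerebellar entry above can be illustrated with a DNI-style synthetic-gradient module: a small predictor learns to estimate a layer's gradient from its current activity, so the layer can update immediately instead of waiting for the feedback phase to finish. The class below is a minimal linear sketch under that generic DNI idea, not the paper's cerebellar model; `SyntheticGradient` is a hypothetical name.

```python
import numpy as np

class SyntheticGradient:
    """Tiny linear model that learns to predict a layer's true gradient from
    its activations, letting the layer update without waiting for backprop."""
    def __init__(self, dim, lr=0.05):
        self.M = np.zeros((dim, dim))
        self.lr = lr

    def predict(self, h):
        return self.M @ h                  # estimated gradient, available immediately

    def update(self, h, true_grad):
        err = self.predict(h) - true_grad  # fit the predictor once the true gradient arrives
        self.M -= self.lr * np.outer(err, h)
```

After enough trials the predictor's estimates track the true gradients, so downstream computation is decoupled from the slow feedback pathway.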
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.