Association: Remind Your GAN not to Forget
- URL: http://arxiv.org/abs/2011.13553v2
- Date: Thu, 25 Mar 2021 09:35:00 GMT
- Title: Association: Remind Your GAN not to Forget
- Authors: Yi Gu, Jie Li, Yuting Gao, Ruoxin Chen, Chentao Wu, Feiyang Cai, Chao Wang, Zirui Zhang
- Abstract summary: We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
- Score: 11.653696510515807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are susceptible to catastrophic forgetting: they fail to
preserve previously acquired knowledge when adapting to new tasks. Inspired by the
human associative memory system, we propose a brain-like approach that imitates
the associative learning process to achieve continual learning. We design a
heuristic mechanism that potentiatively stimulates the model, guiding it to
recall historical episodes based on the current circumstance and the association
experience obtained so far. In addition, a distillation measure depressively
alters the efficacy of synaptic transmission, dampening the feature-reconstruction
learning for the new task. The framework is mediated by potentiation and
depression stimulation, which play opposing roles in directing synaptic and
behavioral plasticity. It requires no access to the original data and is closer
to the human cognitive process. Experiments demonstrate the effectiveness of our
method in alleviating catastrophic forgetting on image-to-image translation
tasks.
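The distillation measure described above can be sketched in miniature. The following is an illustrative NumPy toy, not the authors' implementation: a frozen copy of the old model produces target features from *new-task* inputs only (so no original data is needed), and an L2 penalty dampens how far the new model's features drift. The linear "feature extractors" and the `lambda_distill` weight are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature extractors": a frozen old model vs. a trainable new model.
W_old = rng.normal(size=(8, 4))                 # frozen weights from the previous task
W_new = W_old + 0.1 * rng.normal(size=(8, 4))   # new model, drifted during new-task training

def features(W, x):
    """Linear stand-in for a network's intermediate features."""
    return x @ W

def distillation_loss(W_new, W_old, x, lambda_distill=1.0):
    """L2 penalty between new and old features. Only current-task inputs
    are passed through both models, so the original data is never touched."""
    f_new = features(W_new, x)
    f_old = features(W_old, x)  # targets from the frozen old model
    return lambda_distill * np.mean((f_new - f_old) ** 2)

x_new_task = rng.normal(size=(16, 8))  # samples drawn from the new task only
loss = distillation_loss(W_new, W_old, x_new_task)
print(float(loss) > 0.0)
```

In practice this penalty would be added to the new task's own objective, so total loss = task loss + `lambda_distill` times the feature-drift term, trading plasticity against retention.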
Related papers
- TACOS: Task Agnostic Continual Learning in Spiking Neural Networks [1.703671463296347]
Catastrophic interference, the loss of previously learned information when learning new information, remains a major challenge in machine learning.
We show that neuro-inspired mechanisms such as synaptic consolidation and metaplasticity can mitigate catastrophic interference in a spiking neural network.
Our model, TACOS, combines neuromodulation with complex synaptic dynamics to enable new learning while protecting previous information.
arXiv Detail & Related papers (2024-08-16T15:42:16Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2023-12-06T18:15:08Z)
- Wake-Sleep Consolidated Learning [9.596781985154927]
We propose Wake-Sleep Consolidated Learning to improve the performance of deep neural networks for visual classification tasks.
Our method learns continually via the synchronization between distinct wake and sleep phases.
We evaluate the effectiveness of our approach on three benchmark datasets.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Learning by Active Forgetting for Neural Networks [36.47528616276579]
Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system.
Modern machine learning systems have been working to endow machine with lifelong learning capability through better remembering.
This paper presents a learning model by active forgetting mechanism with artificial neural networks.
arXiv Detail & Related papers (2021-11-21T14:55:03Z)
- AFEC: Active Forgetting of Negative Transfer in Continual Learning [37.03139674884091]
We show that biological neural networks can actively forget the old knowledge that conflicts with the learning of a new experience.
Inspired by the biological active forgetting, we propose to actively forget the old knowledge that limits the learning of new tasks to benefit continual learning.
arXiv Detail & Related papers (2021-10-23T10:03:19Z)
- Learning offline: memory replay in biological and artificial reinforcement learning [1.0136215038345011]
We review the functional roles of replay in the fields of neuroscience and AI.
Replay is important for memory consolidation in biological neural networks.
It is also key to stabilising learning in deep neural networks.
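The replay idea above is commonly realized with a fixed-size experience buffer whose contents are interleaved with new data during training. Below is a hedged stdlib-only sketch using reservoir sampling (a standard buffer-maintenance scheme; the class and its parameters are illustrative, not taken from any one paper).

```python
import random

class ReplayBuffer:
    """Fixed-size buffer maintained by reservoir sampling: every item
    in the stream has an equal chance of being retained, so the buffer
    approximates a uniform sample of all past experiences."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0  # total items observed in the stream

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Keep the new item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        """Draw a replay minibatch to mix with new-task data."""
        return random.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=100)
for step in range(1000):      # stream of "experiences"
    buf.add(step)
batch = buf.sample(8)         # interleave these with the next training batch
```

Training on a mixture of `batch` and fresh samples is what stabilizes learning: gradients from replayed experiences counteract drift away from old solutions.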
arXiv Detail & Related papers (2021-09-21T08:57:19Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks inherit some advantages of "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label-noise memorization, and catastrophic forgetting at negligible cost.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.