Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay
- URL: http://arxiv.org/abs/2112.04728v1
- Date: Thu, 9 Dec 2021 07:11:14 GMT
- Title: Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay
- Authors: Hitesh Vaidya, Travis Desell, and Alexander Ororbia
- Abstract summary: A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
- Score: 67.50637511633212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A lifelong learning agent is able to continually learn from potentially
infinite streams of pattern sensory data. One major historic difficulty in
building agents that adapt in this way is that neural systems struggle to
retain previously-acquired knowledge when learning from new samples. This
problem is known as catastrophic forgetting (interference) and remains an
unsolved problem in the domain of machine learning to this day. While
forgetting in the context of feedforward networks has been examined extensively
over the decades, far less has been done in the context of alternative
architectures such as the venerable self-organizing map (SOM), an unsupervised
neural model that is often used in tasks such as clustering and dimensionality
reduction. Although the competition among its internal neurons might carry the
potential to improve memory retention, we observe that a fixed-size SOM
trained on task-incremental data, i.e., one that receives data points related to
specific classes at certain temporal increments, experiences significant
forgetting. In this study, we propose the continual SOM (c-SOM), a model that
is capable of reducing its own forgetting when processing information.
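The abstract above stops before describing the c-SOM mechanism itself, so the following is only a minimal, self-contained sketch of the setting it describes: a fixed-size Kohonen SOM trained on class-incremental splits of toy data, with quantization error on earlier classes used as a crude forgetting probe. The toy data, grid size, decay schedule, and the `generate_replay` helper (which draws noisy pseudo-samples from the map's own prototypes, a loose reading of "internally-induced generative replay" rather than the paper's actual procedure) are all assumptions made for illustration.

```python
# Minimal sketch, NOT the paper's c-SOM: a fixed-size Kohonen SOM trained on
# class-incremental toy data, with quantization error on earlier classes as a
# crude forgetting probe. generate_replay() draws noisy pseudo-samples from the
# map's own prototypes -- a loose reading of "internally-induced generative
# replay", assumed here for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Three "tasks": each a 2-D Gaussian cluster standing in for one class.
tasks = [rng.normal(loc=c, scale=0.1, size=(200, 2))
         for c in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])]

grid_h, grid_w, dim = 8, 8, 2
weights = rng.uniform(0.0, 1.0, size=(grid_h * grid_w, dim))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], dtype=float)

def som_step(x, weights, lr, radius):
    """One Kohonen update: pull the best-matching unit (BMU) and its grid
    neighbours toward the input x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
    influence = np.exp(-(grid_dist ** 2) / (2.0 * radius ** 2))
    weights += lr * influence[:, None] * (x - weights)
    return weights

def generate_replay(weights, n, noise=0.05):
    """Sample pseudo-patterns from the map itself: pick prototypes at random
    and jitter them (an illustrative stand-in for internal generative replay)."""
    idx = rng.integers(0, len(weights), size=n)
    return weights[idx] + rng.normal(scale=noise, size=(n, weights.shape[1]))

def quantization_error(data, weights):
    """Mean distance from each point to its BMU; a proxy for how well the map
    still represents that data."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

lr, radius = 0.5, 3.0
for t, data in enumerate(tasks):
    for _ in range(5):  # a few passes per task (illustrative schedule)
        batch = data if t == 0 else np.vstack([data, generate_replay(weights, 50)])
        for x in rng.permutation(batch):
            weights = som_step(x, weights, lr, radius)
        lr *= 0.9
        radius *= 0.9
    for s in range(t + 1):  # probe retention of every task seen so far
        print(f"after task {t}: quantization error on task {s} = "
              f"{quantization_error(tasks[s], weights):.4f}")
```

Without the replay term, the quantization error on earlier tasks typically grows as later tasks are trained; this is the forgetting effect the abstract refers to. The real c-SOM's mechanics are not spelled out in the excerpt above, so the replay helper should be read as illustration only.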
Related papers
- Neuromimetic metaplasticity for adaptive continual learning [2.1749194587826026]
We propose a metaplasticity model inspired by human working memory to achieve catastrophic forgetting-free continual learning.
A key aspect of our approach involves implementing distinct types of synapses, from stable to flexible, and randomly intermixing them to train synaptic connections with different degrees of flexibility (a loose sketch of this idea appears after this list).
The model achieved a balanced tradeoff between memory capacity and performance without requiring additional training or structural modifications.
arXiv Detail & Related papers (2024-07-09T12:21:35Z)
- ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks [1.5931140598271163]
We build a model for efficient learning of sequences using only local, always-on, and phase-free plasticity.
We showcase the capabilities of ELiSe in a mock-up of birdsong learning, and demonstrate its flexibility with respect to parametrization.
arXiv Detail & Related papers (2024-02-26T17:30:34Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show nearly a twofold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping [10.970706194360451]
Catastrophic forgetting in neural networks is a significant problem for continual learning.
We introduce Relevance Mapping Networks (RMNs) which are inspired by the Optimal Overlap Hypothesis.
We show that RMNs learn an optimized representational overlap that overcomes the twin problem of catastrophic forgetting and remembering.
arXiv Detail & Related papers (2021-02-22T20:34:00Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Triple Memory Networks: a Brain-Inspired Method for Continual Learning [35.40452724755021]
A neural network adjusts its parameters when learning a new task, but then fails to perform the old tasks well.
The brain has a powerful ability to continually learn new experience without catastrophic interference.
Inspired by such brain strategy, we propose a novel approach named triple memory networks (TMNs) for continual learning.
arXiv Detail & Related papers (2020-03-06T11:35:24Z)
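As flagged in the Neuromimetic metaplasticity entry above, intermixing synapses with different degrees of flexibility can be pictured as giving each weight its own randomly assigned plasticity factor that scales its updates. The sketch below is a loose, self-contained illustration under that assumption; the model, data, and factor values are invented for the example and are not taken from the cited paper.

```python
# Loose illustration of "randomly intermixed synapses with different degrees of
# flexibility": each weight gets its own plasticity factor that scales its
# gradient step. Model, data, and factor values are assumptions for the sketch,
# not the cited paper's implementation.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))

# Randomly mix "stable" (small factor) and "flexible" (factor 1.0) synapses.
plasticity = rng.choice([0.05, 0.3, 1.0], size=W.shape)

def update(W, x, target, lr=0.1):
    """One gradient step on 0.5 * ||W x - target||^2, with each weight's step
    scaled by its own plasticity factor."""
    error = W @ x - target                  # shape (n_out,)
    grad = np.outer(error, x)               # dL/dW, shape (n_out, n_in)
    return W - lr * plasticity * grad

x = rng.normal(size=n_in)
target = np.array([1.0, 0.0, -1.0])
for _ in range(100):
    W = update(W, x, target)

# Stable synapses drift far less than flexible ones over training, which is the
# intuition behind retaining old knowledge while still fitting new data.
```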
This list is automatically generated from the titles and abstracts of the papers on this site.