Brain-inspired feature exaggeration in generative replay for continual
learning
- URL: http://arxiv.org/abs/2110.15056v1
- Date: Tue, 26 Oct 2021 10:49:02 GMT
- Authors: Jack Millichamp, Xi Chen
- Abstract summary: When learning new classes, the internal representation of previously learnt ones can often be overwritten.
Recent developments in neuroscience have uncovered a method through which the brain avoids its own form of memory interference.
This paper presents a new state-of-the-art performance on the classification of early classes in the class-incremental learning dataset CIFAR100.
- Score: 4.682734815593623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The catastrophic forgetting of previously learnt classes is one of the main
obstacles to the successful development of a reliable and accurate generative
continual learning model. When learning new classes, the internal
representation of previously learnt ones can often be overwritten, resulting in
the model's "memory" of earlier classes being lost over time. Recent
developments in neuroscience have uncovered a method through which the brain
avoids its own form of memory interference. By applying a targeted exaggeration
of the differences between the features of similar, yet competing, memories, the
brain can more easily distinguish and recall them. In this paper, the application of
such exaggeration, via the repulsion of replayed samples belonging to competing
classes, is explored. Through the development of a 'reconstruction repulsion'
loss, this paper presents a new state-of-the-art performance on the
classification of early classes in the class-incremental learning dataset
CIFAR100.
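The 'reconstruction repulsion' idea can be illustrated with a minimal sketch: penalize pairs of replayed samples from different (competing) classes whose latent features are too similar, pushing their representations apart. The function name, the cosine-similarity formulation, and the hinge margin below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def reconstruction_repulsion_loss(features, labels, margin=1.0):
    """Hinge-style repulsion between replayed samples of different classes.

    features: (N, D) latent features of replayed/reconstructed samples
    labels:   (N,)   class labels of those samples
    Different-class pairs are penalized when their cosine similarity
    exceeds (1 - margin), exaggerating the differences between
    competing classes.  (Illustrative formulation, not the paper's.)
    """
    # L2-normalize each feature vector so the dot product is cosine similarity
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                      # (N, N) pairwise similarities
    diff_class = labels[:, None] != labels[None, :]
    # hinge: only similarities above the margin threshold are penalized
    repulsion = np.maximum(sim - (1.0 - margin), 0.0)[diff_class]
    return repulsion.mean() if repulsion.size else 0.0
```

In a generative-replay pipeline, a term like this would typically be added to the usual reconstruction and classification losses when training on replayed samples, so that features of easily confused old classes are driven apart rather than overwritten.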
Related papers
- Saliency-Guided Hidden Associative Replay for Continual Learning [13.551181595881326]
Continual Learning is a burgeoning domain in next-generation AI, focusing on training neural networks over a sequence of tasks akin to human learning.
This paper presents Saliency-Guided Hidden Associative Replay for Continual Learning (SHARC).
This novel framework synergizes associative memory with replay-based strategies. SHARC primarily archives salient data segments via sparse memory encoding.
arXiv Detail & Related papers (2023-10-06T15:54:12Z)
- Balanced Destruction-Reconstruction Dynamics for Memory-replay Class Incremental Learning [27.117753965919025]
Class incremental learning (CIL) aims to incrementally update a trained model with the new classes of samples.
Memory-replay CIL consolidates old knowledge by replaying a small number of old classes of samples saved in the memory.
Our theoretical analysis shows that the destruction of old knowledge can be effectively alleviated by balancing the contribution of samples from the current phase and those saved in the memory.
arXiv Detail & Related papers (2023-08-03T11:33:50Z)
- Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have been long observed suffering from the catastrophic forgetting problem.
We find that 1) Vanilla language models are forgetful; 2) Pre-training leads to retentive language models; 3) Knowledge relevance and diversification significantly influence the memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z)
- Detachedly Learn a Classifier for Class-Incremental Learning [11.865788374587734]
We present an analysis showing that the failure of vanilla experience replay (ER) stems from unnecessary re-learning of previous tasks and an inability to distinguish the current task from previous ones.
We propose a novel replay strategy, task-aware experience replay.
Experimental results show our method outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T01:35:44Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL tends to catastrophically forget the characteristics of former ones, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Saliency-Augmented Memory Completion for Continual Learning [8.243137410556495]
How to forget is a problem continual learning must address.
Our paper proposes a new saliency-augmented memory completion framework for continual learning.
arXiv Detail & Related papers (2022-12-26T18:06:39Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Discriminative Distillation to Reduce Class Confusion in Continual Learning [57.715862676788156]
Class confusion may play a role in downgrading the classification performance during continual learning.
We propose a discriminative distillation strategy to help the classifier learn the discriminative features between confusing classes.
arXiv Detail & Related papers (2021-08-11T12:46:43Z)
- Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors [63.21036904487014]
Continual learning of new knowledge over time is one desirable capability for intelligent systems to recognize more and more classes of objects.
We propose a simple yet effective fusion mechanism by including all the previously learned feature extractors into the intelligent model.
Experiments on multiple classification tasks show that the proposed approach can effectively reduce the forgetting of old knowledge, achieving state-of-the-art continual learning performance.
arXiv Detail & Related papers (2021-04-28T07:49:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.