Class-Incremental Learning Using Generative Experience Replay Based on Time-aware Regularization
- URL: http://arxiv.org/abs/2310.03898v1
- Date: Thu, 5 Oct 2023 21:07:45 GMT
- Title: Class-Incremental Learning Using Generative Experience Replay Based on Time-aware Regularization
- Authors: Zizhao Hu, Mohammad Rostami
- Abstract summary: Generative experience replay addresses the challenge of learning new tasks accumulatively without forgetting.
We introduce a time-aware regularization method to fine-tune the three training objective terms used for generative replay.
Experimental results indicate that our method pushes the limit of brain-inspired continual learners under such strict settings.
- Score: 24.143811670210546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning new tasks accumulatively without forgetting remains a critical
challenge in continual learning. Generative experience replay addresses this
challenge by synthesizing pseudo-data points for past learned tasks and later
replaying them for concurrent training along with the new tasks' data.
Generative replay is the best strategy for continual learning under a strict
class-incremental setting when certain constraints need to be met: (i) constant
model size, (ii) no pre-training dataset, and (iii) no memory buffer for
storing past tasks' data. Inspired by biological nervous system mechanisms,
we introduce a time-aware regularization method to dynamically fine-tune the
three training objective terms used for generative replay: supervised learning,
latent regularization, and data reconstruction. Experimental results on major
benchmarks indicate that our method pushes the limit of brain-inspired
continual learners under such strict settings, improves memory retention, and
increases the average performance over continually arriving tasks.
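The abstract does not spell out the weighting schedule, so the following is a minimal PyTorch-style sketch of the idea only: a VAE-with-classifier objective whose three terms (supervised learning, latent regularization, data reconstruction) carry time-dependent coefficients. The coefficient schedules and all names below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: time-aware weighting of the three generative-replay
# objective terms. The coefficient schedules are assumptions, not the paper's.
import torch
import torch.nn.functional as F

def time_aware_loss(logits, targets, recon, x, mu, logvar, task_id, num_tasks):
    t = task_id / max(num_tasks - 1, 1)      # normalized task "time" in [0, 1]
    alpha = 1.0                              # supervised weight (assumed constant)
    beta = 0.5 * (1.0 + t)                   # latent KL weight grows over time (assumption)
    gamma = 1.0 - 0.5 * t                    # reconstruction weight decays (assumption)

    sup = F.cross_entropy(logits, targets)                         # supervised learning
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # latent regularization (VAE KL)
    rec = F.mse_loss(recon, x)                                     # data reconstruction
    return alpha * sup + beta * kl + gamma * rec
```

In a replay step, each batch would presumably mix real samples from the current task with pseudo-samples decoded from a frozen copy of the previous generator, so all three terms see both old and new classes.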
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new ones.
However, storing raw data is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free replay methods synthesize samples by inverting the classification model (see the sketch after this entry).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
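A minimal sketch of the inversion idea: starting from noise, optimize synthetic inputs so that a frozen classifier assigns them to a chosen old class, with a weak prior to keep the inputs bounded. This is a generic DeepInversion-style illustration under assumed shapes, not the cited paper's exact procedure.

```python
# Generic sketch of data-free replay via model inversion: synthesize inputs
# that a frozen classifier labels as an old class. Not the cited paper's method.
import torch
import torch.nn.functional as F

def invert_samples(classifier, target_class, shape=(8, 3, 32, 32), steps=200, lr=0.1):
    classifier.eval()
    x = torch.randn(shape, requires_grad=True)         # start from noise
    labels = torch.full((shape[0],), target_class)     # desired old-class labels
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(classifier(x), labels)  # push inputs toward the old class
        loss = loss + 1e-4 * x.pow(2).mean()           # weak prior keeping inputs bounded
        loss.backward()
        opt.step()
    return x.detach()                                  # pseudo-data for replay
```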
- Replay-enhanced Continual Reinforcement Learning [37.34722105058351]
We introduce RECALL, a replay-enhanced method that greatly improves the plasticity of existing replay-based methods on new tasks.
Experiments on the Continual World benchmark show that RECALL performs significantly better than purely perfect memory replay.
arXiv Detail & Related papers (2023-11-20T06:21:52Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting (a generic sketch of this method family follows this entry).
Results show that BAdam achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
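BAdam's exact update rule is not described above; as a generic illustration of the prior-based family it belongs to, here is an EWC-style quadratic penalty that anchors parameters to their values after the previous task. The importance weights and strength coefficient are illustrative assumptions, not BAdam itself.

```python
# Generic prior-based regularization (EWC-style), shown only to illustrate the
# family BAdam belongs to; this is not BAdam's actual update rule.
import torch

def prior_penalty(model, old_params, importance, strength=100.0):
    """Quadratic penalty anchoring parameters to their post-previous-task values."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]).pow(2)).sum()
    return strength * penalty

# Training objective: total_loss = task_loss + prior_penalty(model, old_params, importance)
```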
- AdaER: An Adaptive Experience Replay Approach for Continual Lifelong Learning [16.457330925212606]
We present adaptive experience replay (AdaER) to address the challenge of continual lifelong learning.
AdaER consists of two stages: memory replay and memory update.
Results show that AdaER outperforms existing continual lifelong learning baselines.
arXiv Detail & Related papers (2023-08-07T01:25:45Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to activate and select only a sparse set of neurons for learning current and past tasks at any stage (see the sketch after this entry).
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
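A minimal sketch of the sparse-activation idea: keep only the top-k activations in each hidden layer and zero the rest, so each task effectively updates a small subset of neurons. This is a generic top-k illustration under assumed sizes; the cited paper selects neurons via Bayesian sparsity rather than a fixed top-k rule.

```python
# Generic sketch of sparse neuron activation: keep only the top-k hidden units.
# Illustrative only; not the cited paper's Bayesian selection mechanism.
import torch

def topk_sparsify(h, k):
    """Zero all but the k largest activations in each row of h."""
    values, indices = torch.topk(h, k, dim=1)
    mask = torch.zeros_like(h).scatter_(1, indices, 1.0)
    return h * mask

h = torch.relu(torch.randn(4, 512))  # hypothetical hidden activations
h_sparse = topk_sparsify(h, k=64)    # only 64 of 512 neurons stay active
```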
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Experience Continual Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better 'stability-plasticity' trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [20.8966035274874]
Generative replay is a promising strategy that generates and replays pseudo-data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model and 2) leverage a self-supervised auxiliary task to further enhance feature stability (see the sketch after this entry).
Empirical results on several datasets show our method consistently achieves substantial improvements over the strong OWM baseline.
arXiv Detail & Related papers (2020-05-07T13:56:22Z)
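The feature-level replay idea above lends itself to a compact sketch: instead of replaying raw inputs, a small conditional generator mimics penultimate-layer features of past classes, which are then replayed through the classifier head alone. All dimensions and module names below are hypothetical, and the self-supervised auxiliary task and OWM update are omitted.

```python
# Minimal sketch of penultimate-layer generative feature replay
# (names and sizes hypothetical; the OWM projection step is omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim, num_classes, latent_dim = 512, 10, 64
classifier_head = nn.Linear(feature_dim, num_classes)  # head over penultimate features
feature_generator = nn.Sequential(                     # class-conditional pseudo-feature generator
    nn.Linear(latent_dim + num_classes, 256), nn.ReLU(),
    nn.Linear(256, feature_dim),
)

def replay_loss(old_labels):
    """Classification loss on generated features of previously seen classes."""
    z = torch.randn(old_labels.size(0), latent_dim)
    y = F.one_hot(old_labels, num_classes).float()
    fake_feats = feature_generator(torch.cat([z, y], dim=1))
    return F.cross_entropy(classifier_head(fake_feats), old_labels)

loss = replay_loss(torch.randint(0, num_classes, (16,)))  # e.g., 16 replayed old-class labels
```

Replaying features rather than pixels keeps the generator small and sidesteps image synthesis, at the cost of having to keep the feature extractor stable across tasks.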
This list is automatically generated from the titles and abstracts of the papers on this site.