Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning
- URL: http://arxiv.org/abs/2401.06548v1
- Date: Fri, 12 Jan 2024 12:51:12 GMT
- Title: Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning
- Authors: Chenyang Wang, Junjun Jiang, Xingyu Hu, Xianming Liu, Xiangyang Ji
- Abstract summary: Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new tasks.
However, storing such data is often impractical due to memory constraints or data privacy issues.
As an alternative, data-free data replay methods synthesize replay samples by inverting them from the classification model.
- Score: 100.7407460674153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning systems are prone to catastrophic forgetting when learning from
a sequence of tasks, where old data from experienced tasks is unavailable when
learning from a new task. To mitigate the problem, a line of methods proposes to
replay data from experienced tasks when learning new tasks. These methods
usually rely on an extra memory buffer to store the data for replay, which is
often impractical given memory constraints or data privacy concerns. As an
alternative, data-free data replay methods synthesize replay samples by
inverting them from the classification model. Although they achieve good
results, these methods still suffer from inconsistency between the inverted and
real training data, which recent works neglect during the inversion stage. To
address this, we propose to measure data consistency quantitatively under some
simplifying assumptions. Using this measurement, we analyze existing
sample-inversion techniques and derive insights that inspire a novel loss
function to reduce the inconsistency. Specifically, the
loss minimizes the KL divergence of the distributions of inverted and real data
under the tied multivariate Gaussian assumption, which is easy to implement in
continual learning. In addition, we observe that the norms of old-class weights
tend to decrease continually as learning progresses. We analyze the underlying
reasons and propose a simple regularization term that balances the class
weights so that samples of old classes remain more distinguishable. Combining
these components, we propose Consistency-enhanced data replay with a debiased
classifier for Class Incremental Learning (CCIL). Extensive experiments on
CIFAR-100, Tiny-ImageNet, and ImageNet100 show consistently improved
performance of CCIL compared to previous approaches.
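To make the two ingredients above concrete, below is a minimal PyTorch-style sketch of (i) the KL divergence between the inverted- and real-data feature distributions under a tied multivariate Gaussian assumption, which reduces to half the squared Mahalanobis distance between the class means, and (ii) a norm-balancing regularizer for old-class classifier weights. The exact formulation is inferred from the abstract rather than the authors' released code, and names such as real_feats, inv_feats, and classifier_weight are hypothetical.

```python
# Minimal sketch (assumption based on the abstract, not the authors' code).
import torch


def tied_gaussian_kl(real_feats: torch.Tensor,
                     inv_feats: torch.Tensor,
                     eps: float = 1e-4) -> torch.Tensor:
    """KL divergence between the feature distributions of real and inverted
    samples of one class, assuming both are Gaussian with a shared (tied)
    covariance. Under that assumption the KL reduces to half the squared
    Mahalanobis distance between the two means."""
    mu_real = real_feats.mean(dim=0)
    mu_inv = inv_feats.mean(dim=0)
    # Tied covariance estimated from the pooled, mean-centred features.
    centred = torch.cat([real_feats - mu_real, inv_feats - mu_inv], dim=0)
    cov = centred.T @ centred / (centred.shape[0] - 1)
    cov = cov + eps * torch.eye(cov.shape[0], device=cov.device)  # stabilise
    diff = (mu_real - mu_inv).unsqueeze(1)
    return 0.5 * (diff.T @ torch.linalg.solve(cov, diff)).squeeze()


def weight_balance_reg(classifier_weight: torch.Tensor,
                       old_class_ids: torch.Tensor) -> torch.Tensor:
    """Regularizer that pulls the norms of old-class weight vectors toward
    the mean norm over all classes, so that old-class logits are not
    systematically suppressed (one plausible reading of the 'debiased
    classifier' component)."""
    norms = classifier_weight.norm(dim=1)   # per-class weight norms
    target = norms.mean().detach()          # shared target norm
    return ((norms[old_class_ids] - target) ** 2).mean()
```

In a continual-learning loop, both terms would be added to the usual cross-entropy objective with tunable coefficients; note that in the data-free setting the "real" statistics would in practice come from stored class-wise means and a shared covariance rather than raw old-task data.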
Related papers
- Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation [3.8506666685467343]
In continual learning, previous knowledge is forgotten when a model learns new tasks.
In this paper, we address this problem by acquiring transferable knowledge through self-distillation.
Our proposed method outperformed conventional methods in experiments on the CIFAR10, CIFAR100, and MiniImageNet datasets.
arXiv Detail & Related papers (2024-09-17T16:26:33Z)
- Prior-Free Continual Learning with Unlabeled Data in the Wild [24.14279172551939]
We propose a Prior-Free Continual Learning (PFCL) method to incrementally update a trained model on new tasks.
PFCL learns new tasks without knowing the task identity or any previous data.
Our experiments show that our PFCL method significantly mitigates forgetting in all three learning scenarios.
arXiv Detail & Related papers (2023-10-16T13:59:56Z)
- Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning [14.462797749666992]
We propose a holistic approach to jointly learn the representation and class prototypes.
We propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data.
This method yields state-of-the-art performance in the task-incremental setting.
arXiv Detail & Related papers (2023-03-26T16:35:45Z)
- Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay [52.251188477192336]
Few-shot class-incremental learning (FSCIL) aims to enable a deep learning system to incrementally learn new classes from limited data.
We show through empirical results that adopting data replay is surprisingly favorable.
We propose a data-free replay scheme that synthesizes data with a generator without accessing real data.
arXiv Detail & Related papers (2022-07-22T17:30:51Z)
- New Insights on Reducing Abrupt Representation Change in Online Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
arXiv Detail & Related papers (2022-03-08T01:37:00Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Continual Learning for Fake Audio Detection [62.54860236190694]
This paper proposes Detecting Fake Without Forgetting, a continual-learning-based method that enables the model to learn new spoofing attacks incrementally.
Experiments are conducted on the ASVspoof 2019 dataset.
arXiv Detail & Related papers (2021-04-15T07:57:05Z)
- Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z)
- Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [20.8966035274874]
Generative replay is a promising strategy that generates and replays pseudo data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model and 2) leverage a self-supervised auxiliary task to further enhance feature stability.
Empirical results on several datasets show that our method consistently achieves substantial improvements over the powerful OWM baseline.
arXiv Detail & Related papers (2020-05-07T13:56:22Z)