Continual Learning in the Presence of Spurious Correlation
- URL: http://arxiv.org/abs/2303.11863v1
- Date: Tue, 21 Mar 2023 14:06:12 GMT
- Title: Continual Learning in the Presence of Spurious Correlation
- Authors: Donggyu Lee, Sangwon Jung, Taesup Moon
- Abstract summary: We show that standard continual learning algorithms can transfer biases from one task to another, both forward and backward.
We propose a plug-in method for debiasing-aware continual learning, dubbed Group-class Balanced Greedy Sampling (BGS).
- Score: 23.999136417157597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most continual learning (CL) algorithms have focused on tackling the
stability-plasticity dilemma, that is, the challenge of preventing the
forgetting of previous tasks while learning new ones. However, they have
overlooked the impact of knowledge transfer when the dataset for a certain
task is biased - namely, when unintended spurious correlations are learned
from the biased dataset. In that case, how would such biases affect
learning future tasks or the knowledge already acquired from past tasks? In
this work, we carefully design systematic experiments using one synthetic and
two real-world datasets to answer the question from our empirical findings.
Specifically, we first show through two-task CL experiments that standard CL
methods, which are unaware of dataset bias, can transfer biases from one task
to another, both forward and backward, and that the severity of this transfer
depends on whether the CL method emphasizes stability or plasticity.
We then show that bias transfer also exists and even accumulates in
longer task sequences. Finally, we propose a simple yet strong plug-in
method for debiasing-aware continual learning, dubbed Group-class Balanced
Greedy Sampling (BGS). We show that BGS consistently reduces the bias of a
CL model, at the cost of at most a slight drop in CL performance.
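The abstract does not spell out the sampling procedure, but a group-class balanced greedy replay buffer can be sketched as follows. This is a minimal illustrative sketch under the assumption that each stream element carries a (sample, class, group) triple and that balance is enforced by greedily evicting from the currently largest (group, class) cell; it is not the paper's exact algorithm.

```python
import random
from collections import defaultdict

def balanced_greedy_sample(stream, buffer_size):
    """Illustrative sketch of group-class balanced greedy sampling:
    keep a replay buffer balanced across (group, class) cells by
    greedily evicting from whichever cell is currently largest."""
    buffer = []
    cells = defaultdict(list)  # (group, class) -> indices into buffer
    for x, y, g in stream:
        key = (g, y)
        if len(buffer) < buffer_size:
            cells[key].append(len(buffer))
            buffer.append((x, y, g))
        else:
            # Find the most over-represented cell; if the incoming
            # sample belongs to it already, skip the sample instead.
            largest = max(cells, key=lambda k: len(cells[k]))
            if largest == key:
                continue
            # Evict a random member of the largest cell and reuse its slot.
            idx = cells[largest].pop(random.randrange(len(cells[largest])))
            buffer[idx] = (x, y, g)
            cells[key].append(idx)
    return buffer
```

On a stream that is heavily skewed toward one group, this update drives the buffer toward an equal count per (group, class) cell, which is the balancing behavior the abstract describes.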
Related papers
- Prior-free Balanced Replay: Uncertainty-guided Reservoir Sampling for Long-Tailed Continual Learning [8.191971407001034]
We propose a novel Prior-free Balanced Replay (PBR) framework to learn from long-tailed data stream with less forgetting.
We incorporate two prior-free components to further reduce the forgetting issue.
Our approach is evaluated on three standard long-tailed benchmarks.
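The uncertainty-guided reservoir sampling that PBR builds on extends the classic reservoir update. A minimal sketch of that base step (plain Algorithm R, without PBR's uncertainty guidance or prior-free components) looks like this:

```python
import random

def reservoir_update(reservoir, k, item, n_seen):
    """Classic reservoir sampling step (Algorithm R): maintain a
    uniform random sample of size k over a stream of unknown length.
    n_seen is the number of items observed before this one."""
    if n_seen < k:
        reservoir.append(item)          # fill phase
    else:
        j = random.randrange(n_seen + 1)
        if j < k:
            reservoir[j] = item         # replace with prob. k/(n_seen+1)
    return reservoir
```

Each stored item ends up in the reservoir with equal probability k/n, which is the uniform baseline that balanced-replay variants then reweight.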
arXiv Detail & Related papers (2024-08-27T11:38:01Z) - BiasPruner: Debiased Continual Learning for Medical Image Classification [20.6029805375464]
We present BiasPruner, a CL framework that intentionally forgets spurious correlations in the training data that could lead to shortcut learning.
During inference, BiasPruner employs a simple task-agnostic approach to select the best debiased subnetwork for predictions.
We conduct experiments on three medical datasets for skin lesion classification and chest X-Ray classification and demonstrate that BiasPruner consistently outperforms SOTA CL methods in terms of classification performance and fairness.
arXiv Detail & Related papers (2024-07-11T15:45:57Z) - What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP pre-trained on such data exhibits notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
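The recency bias that ARC targets can be illustrated with a generic post-hoc logit correction (this is a hypothetical sketch, not ARC's actual procedure): classes the model over-predicts, typically those of the most recent task, are down-weighted by the log of their prediction frequency.

```python
import math

def debias_logits(logits, pred_counts):
    """Generic post-hoc recency-bias correction (illustrative only):
    subtract the log of the model's own per-class prediction frequency,
    with add-one smoothing, so over-predicted classes lose score."""
    total = sum(pred_counts)
    return [z - math.log((c + 1) / (total + len(pred_counts)))
            for z, c in zip(logits, pred_counts)]
```

With equal raw logits, the class the model predicted less often ends up ranked higher after correction.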
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate this problem, a line of methods proposes replaying data from experienced tasks when learning new ones.
However, storing raw data is often infeasible in practice due to memory constraints or data-privacy issues.
As an alternative, data-free replay methods synthesize samples by inverting the classification model.
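The model inversion these data-free methods rely on can be sketched on a toy linear classifier (hypothetical illustration; real methods invert deep networks with richer regularizers): run gradient ascent on the input to maximize one class's score under an L2 penalty.

```python
def invert_class(w, c, steps=200, lr=0.1, l2=0.5):
    """Toy model inversion: maximize w[c]·x - l2*||x||^2 by gradient
    ascent on the input x of a linear classifier with weight rows w.
    The optimum is x = w[c] / (2*l2), a class-prototype input."""
    x = [0.0] * len(w[c])
    for _ in range(steps):
        for i in range(len(x)):
            # gradient of the objective w.r.t. x[i]
            x[i] += lr * (w[c][i] - 2 * l2 * x[i])
    return x
```

With l2 = 0.5 the synthesized input converges to the class's weight vector itself, the simplest possible "replayed sample".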
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Prior-Free Continual Learning with Unlabeled Data in the Wild [24.14279172551939]
We propose a Prior-Free Continual Learning (PFCL) method to incrementally update a trained model on new tasks.
PFCL learns new tasks without knowing the task identity or any previous data.
Our experiments show that our PFCL method significantly mitigates forgetting in all three learning scenarios.
arXiv Detail & Related papers (2023-10-16T13:59:56Z) - Progressive Learning without Forgetting [8.563323015260709]
We focus on two challenging problems in the paradigm of Continual Learning (CL).
PLwF introduces functions from previous tasks to construct a knowledge space that contains the most reliable knowledge on each task.
Credit assignment controls the tug-of-war dynamics by removing gradient conflict through projection.
In comparison with other CL methods, we report notably better results even without relying on any raw data.
arXiv Detail & Related papers (2022-11-28T10:53:14Z) - Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance on both new tasks and old tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed capacity neural network without data replay.
arXiv Detail & Related papers (2022-11-01T23:55:51Z) - Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z) - When does Bias Transfer in Transfer Learning? [89.22641454588278]
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside.
We demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapting the model to the target class.
arXiv Detail & Related papers (2022-07-06T17:58:07Z) - Learning Stable Classifiers by Transferring Unstable Features [59.06169363181417]
We study transfer learning in the presence of spurious correlations.
We experimentally demonstrate that directly transferring the stable feature extractor learned on the source task may not eliminate these biases for the target task.
We hypothesize that the unstable features in the source task and those in the target task are directly related.
arXiv Detail & Related papers (2021-06-15T02:41:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.