The Importance of Robust Features in Mitigating Catastrophic Forgetting
- URL: http://arxiv.org/abs/2306.17091v1
- Date: Thu, 29 Jun 2023 16:48:15 GMT
- Title: The Importance of Robust Features in Mitigating Catastrophic Forgetting
- Authors: Hikmat Khan, Nidhal C. Bouaynaya, Ghulam Rasool
- Abstract summary: We introduce the CL robust dataset and train four baseline models on both the standard and CL robust datasets.
Our results demonstrate that the CL models trained on the CL robust dataset experienced less catastrophic forgetting of the previously learned tasks than when trained on the standard dataset.
- Score: 0.7734726150561088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning (CL) is an approach to address catastrophic forgetting,
which refers to forgetting previously learned knowledge by neural networks when
trained on new tasks or data distributions. Research on adversarial robustness
has decomposed features into robust and non-robust types and demonstrated that
models trained on robust features achieve significantly enhanced adversarial robustness.
However, no study has been conducted on the efficacy of robust features from
the lens of the CL model in mitigating catastrophic forgetting in CL. In this
paper, we introduce the CL robust dataset and train four baseline models on
both the standard and CL robust datasets. Our results demonstrate that the CL
models trained on the CL robust dataset experienced less catastrophic
forgetting of the previously learned tasks than when trained on the standard
dataset. Our observations highlight the significance of the features provided
to the underlying CL models, showing that CL robust features can alleviate
catastrophic forgetting.
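The abstract's central claim is that models trained on the CL robust dataset "experienced less catastrophic forgetting". As a rough illustration of how such a claim is typically quantified (this sketch is not from the paper itself), continual-learning evaluations often report an average-forgetting score computed from a task-accuracy matrix; the accuracy numbers below are invented for demonstration.

```python
# Hypothetical sketch: the standard "average forgetting" metric for CL.
# acc[i][j] = accuracy on task j after training on task i (toy values).

def average_forgetting(acc):
    """Mean drop from each old task's best-ever accuracy to its final accuracy."""
    n = len(acc)
    drops = []
    for j in range(n - 1):  # every task except the last can be forgotten
        best = max(acc[i][j] for i in range(j, n - 1))
        drops.append(best - acc[n - 1][j])
    return sum(drops) / len(drops)

# Toy accuracy matrix for three sequential tasks.
acc = [
    [0.90, 0.00, 0.00],
    [0.70, 0.85, 0.00],
    [0.60, 0.75, 0.88],
]
print(average_forgetting(acc))  # (0.90-0.60 + 0.85-0.75) / 2 = 0.20
```

Under this metric, a model trained on robust features would show a smaller average drop than the same architecture trained on the standard dataset.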
Related papers
- Generalization Beyond Data Imbalance: A Controlled Study on CLIP for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP models pre-trained on such data exhibit notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z) - CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models [23.398619576886375]
Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has been learned.
Recently, pre-trained vision-language models such as CLIP, with powerful generalizability, have been gaining traction as practical CL candidates.
Our work proposes Continual LeArning with Probabilistic finetuning (CLAP).
arXiv Detail & Related papers (2024-03-28T04:15:58Z) - Data Poisoning for In-context Learning [49.77204165250528]
In-context learning (ICL) has been recognized for its innovative ability to adapt to new tasks.
This paper delves into the critical issue of ICL's susceptibility to data poisoning attacks.
We introduce ICLPoison, a specialized attacking framework conceived to exploit the learning mechanisms of ICL.
arXiv Detail & Related papers (2024-02-03T14:20:20Z) - A Comprehensive Study of Privacy Risks in Curriculum Learning [25.57099711643689]
Training a machine learning model with data following a meaningful order has been proven to be effective in accelerating the training process.
The key enabling technique is curriculum learning (CL), which has seen great success and has been deployed in areas like image and text classification.
Yet, how CL affects the privacy of machine learning is unclear.
arXiv Detail & Related papers (2023-10-16T07:06:38Z) - RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z) - Robustness-preserving Lifelong Learning via Dataset Condensation [11.83450966328136]
'Catastrophic forgetting' refers to a notorious dilemma between improving model accuracy on new data and retaining accuracy on previous data.
We propose a new memory-replay LL strategy that leverages modern bi-level optimization techniques to determine the 'coreset' of the current data.
We term the resulting LL framework 'Data-Efficient Robustness-Preserving LL' (DERPLL)
Experimental results show that DERPLL outperforms the conventional coreset-guided LL baseline.
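The memory-replay strategy summarized above keeps a small set of past examples and mixes them into training on new data. As a minimal sketch of that general idea (assumed for illustration; DERPLL's actual coreset selection uses bi-level optimization, not the reservoir sampling shown here), a bounded replay buffer can be maintained like this:

```python
import random

# Minimal sketch of a memory-replay buffer (illustrative only, not DERPLL).
# Reservoir sampling keeps a uniform random subset of all examples seen so far.

class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Every example seen so far has equal probability of being retained."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            k = self.rng.randrange(self.seen)
            if k < self.capacity:
                self.data[k] = example

    def sample(self, n):
        """Draw up to n stored examples to mix into the current batch."""
        return self.rng.sample(self.data, min(n, len(self.data)))

buf = ReplayBuffer(capacity=5)
for x in range(100):
    buf.add(x)
print(len(buf.data))  # 5
```

A coreset-based method replaces the uniform retention rule with an optimization that picks the most informative subset of the current data.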
arXiv Detail & Related papers (2023-03-07T19:09:03Z) - Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that study CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z) - Self-Supervised Models are Continual Learners [79.70541692930108]
We show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for Continual Learning.
We devise a framework for Continual self-supervised visual representation Learning that significantly improves the quality of the learned representations.
arXiv Detail & Related papers (2021-12-08T10:39:13Z) - When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? [99.4914671654374]
We propose AdvCL, a novel adversarial contrastive pretraining framework.
We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency.
arXiv Detail & Related papers (2021-11-01T17:59:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.