Beyond Supervised Continual Learning: a Review
- URL: http://arxiv.org/abs/2208.14307v1
- Date: Tue, 30 Aug 2022 14:44:41 GMT
- Title: Beyond Supervised Continual Learning: a Review
- Authors: Benedikt Bagus, Alexander Gepperth, Timothée Lesort
- Abstract summary: Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
- Score: 69.9674326582747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning (CL, sometimes also termed incremental learning) is a
flavor of machine learning where the usual assumption of stationary data
distribution is relaxed or omitted. When naively applying, e.g., DNNs in CL
problems, changes in the data distribution can cause the so-called catastrophic
forgetting (CF) effect: an abrupt loss of previous knowledge. Although many
significant contributions to enabling CL have been made in recent years, most
works address supervised (classification) problems. This article reviews
literature that studies CL in other settings, such as learning with reduced
supervision, fully unsupervised learning, and reinforcement learning. Besides
proposing a simple schema for classifying CL approaches w.r.t. their level of
autonomy and supervision, we discuss the specific challenges associated with
each setting and the potential contributions to the field of CL in general.
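To make the CF effect concrete, here is a minimal, self-contained sketch (a toy example written for this summary, not code from the paper): a linear classifier is trained naively on one task and then on a second task whose decision boundary conflicts with the first, after which accuracy on the first task collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(pos_center, neg_center, n=500):
    # Two Gaussian blobs: label 1 around pos_center, label 0 around neg_center.
    X = np.vstack([rng.normal(pos_center, 0.5, (n, 2)),
                   rng.normal(neg_center, 0.5, (n, 2))])
    y = np.hstack([np.ones(n), np.zeros(n)])
    return X, y

def train_gd(w, b, X, y, lr=0.1, epochs=50):
    # Plain full-batch gradient descent on the logistic loss,
    # i.e. the naive baseline with no replay or regularization.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# The two tasks place the positive class on opposite sides, so their optimal
# linear boundaries conflict: a simple recipe for catastrophic forgetting.
XA, yA = make_task(pos_center=(+2.0, 0.0), neg_center=(-2.0, 0.0))
XB, yB = make_task(pos_center=(-2.0, 0.0), neg_center=(+2.0, 0.0))

w, b = np.zeros(2), 0.0
w, b = train_gd(w, b, XA, yA)
print("accuracy on task A after training on A:", accuracy(w, b, XA, yA))  # typically close to 1.0

w, b = train_gd(w, b, XB, yB)  # naive continual update on the new task
print("accuracy on task A after training on B:", accuracy(w, b, XA, yA))  # typically collapses toward 0.0
```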
Related papers
- What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z)
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning [38.063942750061585]
We introduce a novel approach, CoLeCLIP, that learns an open-domain CL model based on CLIP.
CoLeCLIP outperforms state-of-the-art methods for open-domain CL under both task- and class-incremental learning settings.
arXiv Detail & Related papers (2024-03-15T12:28:21Z)
- Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning [52.046037471678005]
We focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories.
We propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning.
arXiv Detail & Related papers (2023-12-27T04:40:12Z)
- POP: Prompt Of Prompts for Continual Learning [59.15888651733645]
Continual learning (CL) aims to mimic the human ability to learn new concepts without catastrophic forgetting.
We show that a foundation model equipped with POP learning is able to outperform classic CL methods by a significant margin.
arXiv Detail & Related papers (2023-06-14T02:09:26Z)
- On the Effectiveness of Equivariant Regularization for Robust Online Continual Learning [17.995662644298974]
Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge to both previous tasks and future ones.
Recent research has shown that self-supervision can produce versatile models that can generalize well to diverse downstream tasks.
We propose Continual Learning via Equivariant Regularization (CLER), an OCL approach that leverages equivariant tasks for self-supervision.
arXiv Detail & Related papers (2023-05-05T16:10:31Z)
- A Study of Continual Learning Methods for Q-Learning [78.6363825307044]
We present an empirical study on the use of continual learning (CL) methods in a reinforcement learning (RL) scenario.
Our results show that dedicated CL methods can significantly improve learning when compared to the baseline technique of "experience replay" (a minimal rehearsal-buffer sketch of this baseline follows this list).
arXiv Detail & Related papers (2022-06-08T14:51:52Z)
- Weakly Supervised Continual Learning [17.90483695137098]
This work explores Weakly Supervised Continual Learning (WSCL).
We show not only that our proposals exhibit higher flexibility when supervised information is scarce, but also that fewer than 25% of the labels can be enough to reach or even outperform SOTA methods trained under full supervision.
arXiv Detail & Related papers (2021-08-14T14:38:20Z)
- Continual Lifelong Learning in Natural Language Processing: A Survey [3.9103337761169943]
Continual learning (CL) aims to enable information systems to learn from a continuous data stream across time.
It is difficult for existing deep learning architectures to learn a new task without largely forgetting previously acquired knowledge.
We look at the problem of CL through the lens of various NLP tasks.
arXiv Detail & Related papers (2020-12-17T18:44:36Z)
- A Survey on Curriculum Learning [48.36129047271622]
Curriculum learning (CL) is a training strategy that trains a machine learning model from easier data to harder data.
As an easy-to-use plug-in, the CL strategy has demonstrated its power in improving the generalization capacity and convergence rate of various models.
arXiv Detail & Related papers (2020-10-25T17:15:04Z)
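Several of the papers above (e.g., the Q-Learning study) compare against the "experience replay" rehearsal baseline. The following is a minimal sketch of such a rehearsal buffer, assuming a generic reservoir-sampling implementation rather than the exact setup of any paper listed here; the class name and toy integer stream are illustrative only.

```python
import random

class ReplayBuffer:
    """Fixed-capacity rehearsal memory using reservoir sampling, so every sample
    seen so far is retained with equal probability regardless of arrival time."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0

    def add(self, sample):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Keep the new sample with probability capacity / n_seen.
            idx = random.randrange(self.n_seen)
            if idx < self.capacity:
                self.buffer[idx] = sample

    def sample(self, k):
        # Draw up to k stored samples to mix into the current training batch.
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Toy usage: integers stand in for stored experiences (e.g. RL transitions or labeled examples).
buf = ReplayBuffer(capacity=100)
for t in range(1000):
    replayed = buf.sample(8)   # rehearse old experience alongside the new sample
    buf.add(t)
print(len(buf.buffer), sorted(replayed)[:5])
```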
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.