Weakly Supervised Continual Learning
- URL: http://arxiv.org/abs/2108.06552v1
- Date: Sat, 14 Aug 2021 14:38:20 GMT
- Title: Weakly Supervised Continual Learning
- Authors: Matteo Boschini, Pietro Buzzega, Lorenzo Bonicelli, Angelo Porrello,
Simone Calderara
- Abstract summary: This work explores Weakly Supervised Continual Learning (WSCL), in which only a small fraction of the input examples shown to the learner are labeled.
We show that our proposals not only exhibit higher flexibility when supervised information is scarce, but also that less than 25% of the labels can be enough to reach or even outperform SOTA methods trained under full supervision.
- Score: 17.90483695137098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning (CL) investigates how to train Deep Networks on a stream
of tasks without incurring catastrophic forgetting. CL settings proposed in the
literature assume that every incoming example is paired with ground-truth
annotations. However, this clashes with many real-world applications: gathering
labeled data, which is in itself tedious and expensive, becomes indeed
infeasible when data flow as a stream and must be consumed in real-time. This
work explores Weakly Supervised Continual Learning (WSCL): here, only a small
fraction of labeled input examples are shown to the learner. We assess how
current CL methods (e.g.: EWC, LwF, iCaRL, ER, GDumb, DER) perform in this
novel and challenging scenario, in which overfitting entangles forgetting.
Subsequently, we design two novel WSCL methods which exploit metric learning
and consistency regularization to leverage unsupervised data while learning. In
doing so, we show not only that our proposals exhibit higher flexibility when
supervised information is scarce, but also that less than 25% of the labels can
be enough to reach or even outperform SOTA methods trained under full
supervision.
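As an illustration of how consistency regularization can leverage the unlabeled portion of the stream, the sketch below combines a cross-entropy term on the few labeled examples with a consistency term on unlabeled ones. This is a minimal PyTorch-style example under assumed names (`model`, `augment`, `lambda_u`); it is not the paper's actual loss or implementation.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, augment, lambda_u=1.0):
    """Illustrative loss for a weakly supervised batch: supervised
    cross-entropy on the scarce labeled examples plus a consistency
    term that asks for matching predictions on two augmentations of
    each unlabeled example (weighting and augmentation are assumptions,
    not the paper's exact formulation)."""
    # Supervised term on the labeled fraction of the batch.
    sup = F.cross_entropy(model(x_lab), y_lab)

    # Consistency term: two stochastic views of the same unlabeled
    # inputs should produce similar class distributions.
    p1 = F.softmax(model(augment(x_unlab)), dim=1)
    p2 = F.softmax(model(augment(x_unlab)), dim=1)
    cons = F.mse_loss(p1, p2)

    return sup + lambda_u * cons
```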
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z) - Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation [123.4883806344334]
We study a realistic Continual Learning setting where learning algorithms are granted a restricted computational budget per time step while training.
We apply this setting to large-scale semi-supervised Continual Learning scenarios with sparse label rates.
Our extensive analysis and ablations demonstrate that DietCL is stable under a full spectrum of label sparsity, computational budget, and various other ablations.
arXiv Detail & Related papers (2024-04-19T10:10:39Z) - DELTA: Decoupling Long-Tailed Online Continual Learning [7.507868991415516]
Long-Tailed Online Continual Learning (LTOCL) aims to learn new tasks from sequentially arriving class-imbalanced data streams.
We present DELTA, a decoupled learning approach designed to enhance learning representations.
We demonstrate that DELTA improves the capacity for incremental learning, surpassing existing OCL methods.
arXiv Detail & Related papers (2024-04-06T02:33:04Z) - In-Context Learning Learns Label Relationships but Is Not Conventional Learning [60.891931501449726]
There is currently no consensus about how the in-context learning (ICL) ability of Large Language Models works.
We provide novel insights into how ICL leverages label information, revealing both capabilities and limitations.
Our experiments show that ICL predictions almost always depend on in-context labels and that ICL can learn truly novel tasks in-context.
arXiv Detail & Related papers (2023-07-23T16:54:41Z) - Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
arXiv Detail & Related papers (2023-02-02T12:21:10Z) - Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z) - Online Continual Learning on a Contaminated Data Stream with Blurry Task
Boundaries [17.43350151320054]
A large body of continual learning (CL) methods assumes data streams with clean labels, and online learning scenarios under noisy data streams are yet underexplored.
We consider a more practical CL task setup of online learning from a blurry data stream with corrupted labels, where existing CL methods struggle.
We propose a novel strategy to manage and use the memory through a unified approach that combines label-noise-aware diverse sampling with robust, semi-supervised learning.
arXiv Detail & Related papers (2022-03-29T08:52:45Z) - Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing the annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimating pseudo-ground truth for the unlabeled data (a generic pseudo-labeling sketch is given after this list).
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
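To make the pseudo-ground-truth idea from the last entry concrete, the sketch below fits a Gaussian Process on labeled feature/target pairs and keeps only low-uncertainty predictions as pseudo-targets for unlabeled data. It is a generic, illustrative scikit-learn example; the function name, the RBF + WhiteKernel choice, and the uncertainty threshold are assumptions, not the paper's actual iterative mechanism.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_pseudo_targets(feat_lab, y_lab, feat_unlab, max_std=0.5):
    """Fit a GP on labeled features and propose pseudo-targets for
    unlabeled features, keeping only predictions whose predictive
    standard deviation is below `max_std` (threshold is an assumption)."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(feat_lab, y_lab)
    mean, std = gp.predict(feat_unlab, return_std=True)
    keep = std < max_std
    return feat_unlab[keep], mean[keep]

# Toy usage with random features, just to show the call pattern.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 8)), rng.normal(size=20)
X_unlab = rng.normal(size=(100, 8))
X_sel, y_pseudo = gp_pseudo_targets(X_lab, y_lab, X_unlab)
```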