Active Continual Learning: On Balancing Knowledge Retention and Learnability
- URL: http://arxiv.org/abs/2305.03923v2
- Date: Tue, 30 Jan 2024 12:24:42 GMT
- Title: Active Continual Learning: On Balancing Knowledge Retention and Learnability
- Authors: Thuy-Trang Vu, Shahram Khadivi, Mahsa Ghorbanali, Dinh Phung and
Gholamreza Haffari
- Abstract summary: Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL).
This paper considers the under-explored problem of active continual learning (ACL) for a sequence of active learning (AL) tasks.
We investigate the effectiveness and interplay between several AL and CL algorithms in the domain, class and task-incremental scenarios.
- Score: 43.6658577908349
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acquiring new knowledge without forgetting what has been learned in a
sequence of tasks is the central focus of continual learning (CL). While tasks
arrive sequentially, the training data are often prepared and annotated
independently, leading to the CL of incoming supervised learning tasks. This
paper considers the under-explored problem of active continual learning (ACL)
for a sequence of active learning (AL) tasks, where each incoming task includes
a pool of unlabelled data and an annotation budget. We investigate the
effectiveness and interplay between several AL and CL algorithms in the domain,
class and task-incremental scenarios. Our experiments reveal the trade-off
between the two contrasting goals of CL and AL: not forgetting old knowledge
and quickly learning new knowledge, respectively. While conditioning the AL
query strategy on the annotations collected for previous tasks improves task
performance in the domain- and task-incremental settings, our proposed
forgetting-learning profile suggests a gap in balancing the effects of AL and
CL in the class-incremental scenario.
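The ACL setup described in the abstract, where each incoming task brings an unlabelled pool and an annotation budget and a single model is updated across tasks, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's method: the `CentroidLearner` stand-in model, the synthetic two-blob tasks, and the margin-based query rule are all illustrative assumptions; the paper studies real AL query strategies combined with neural CL algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    # Synthetic two-class task: Gaussian blobs around +center and -center.
    X = np.vstack([rng.normal(center, 1.0, (100, 2)),
                   rng.normal(-center, 1.0, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    return X, y

class CentroidLearner:
    """Tiny stand-in for the continually trained model:
    per-class centroids updated incrementally across tasks."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, X, y):
        for c in np.unique(y):
            self.sums[c] = self.sums.get(c, 0.0) + X[y == c].sum(axis=0)
            self.counts[c] = self.counts.get(c, 0) + int((y == c).sum())

    def _centroids(self):
        return np.stack([self.sums[c] / self.counts[c]
                         for c in sorted(self.sums)])

    def margins(self, X):
        # Distance gap between the two nearest centroids;
        # a small gap means the model is uncertain about that point.
        d = np.linalg.norm(X[:, None] - self._centroids()[None], axis=2)
        d.sort(axis=1)
        return d[:, 1] - d[:, 0]

    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self._centroids()[None], axis=2)
        return np.argmin(d, axis=1)

model = CentroidLearner()
budget = 20  # annotation budget per incoming task
tasks = [np.array([2.0, 2.0]), np.array([4.0, -1.0])]  # two "domains"

for t, center in enumerate(tasks):
    X_pool, y_pool = make_task(center)
    # Seed with two labelled points per class so the pool can be scored.
    seed = np.concatenate([rng.choice(100, 2, replace=False),
                           100 + rng.choice(100, 2, replace=False)])
    model.update(X_pool[seed], y_pool[seed])
    # AL query: spend the budget on the most uncertain pool points.
    query = np.argsort(model.margins(X_pool))[:budget]
    model.update(X_pool[query], y_pool[query])  # "oracle" provides labels
    acc = (model.predict(X_pool) == y_pool).mean()
    print(f"task {t}: pool accuracy {acc:.2f}")
```

Because the same `model` is updated on every task, queries on task 1 are scored by a model that already carries task 0's knowledge, which is the interplay between AL and CL the paper investigates.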
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning [17.236861687708096]
Attention-Guided Incremental Learning (AGILE) is a rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks.
AGILE significantly improves generalization performance by mitigating task interference, outperforming rehearsal-based approaches in several CL scenarios.
arXiv Detail & Related papers (2024-05-22T20:29:15Z)
- Online Continual Learning via the Knowledge Invariant and Spread-out Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR-100, Split SVHN, Split CUB200 and Split Tiny-ImageNet.
arXiv Detail & Related papers (2023-02-02T04:03:38Z)
- Toward Sustainable Continual Learning: Detection and Knowledge Repurposing of Similar Tasks [31.095642850920385]
We introduce a paradigm where the continual learner gets a sequence of mixed similar and dissimilar tasks.
We propose a new continual learning framework that uses a task similarity detection function that does not require additional learning.
Our experiments show that the proposed framework performs competitively on widely used computer vision benchmarks.
arXiv Detail & Related papers (2022-10-11T19:35:30Z)
- Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z)
- Theoretical Understanding of the Information Flow on Continual Learning Performance [2.741266294612776]
Continual learning (CL) is a setting in which an agent has to learn from an incoming stream of data sequentially.
We study CL performance's relationship with information flow in the network to answer the question "How can knowledge of information flow between layers be used to alleviate CF?"
Our analysis provides novel insights into information adaptation within the layers during the incremental task learning process.
arXiv Detail & Related papers (2022-04-26T00:35:58Z)
- Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML to introduce additional representation for each sentence learned from the extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z)
- Self-Attention Meta-Learner for Continual Learning [5.979373021392084]
Self-Attention Meta-Learner (SAM) learns prior knowledge for continual learning that permits learning a sequence of tasks.
SAM incorporates an attention mechanism that learns to select the particular relevant representation for each future task.
We evaluate the proposed method on the Split CIFAR-10/100 and Split MNIST benchmarks in the task-inference setting.
arXiv Detail & Related papers (2021-01-28T17:35:04Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.