Effects of Auxiliary Knowledge on Continual Learning
- URL: http://arxiv.org/abs/2206.02577v1
- Date: Fri, 3 Jun 2022 14:31:59 GMT
- Title: Effects of Auxiliary Knowledge on Continual Learning
- Authors: Giovanni Bellitto, Matteo Pennisi, Simone Palazzo, Lorenzo Bonicelli,
Matteo Boschini, Simone Calderara, Concetto Spampinato
- Abstract summary: In Continual Learning (CL), a neural network is trained on a stream of data whose distribution changes over time.
Most existing CL approaches focus on preserving acquired knowledge, thus working on the model's past.
We argue that, as the model has to continually learn new tasks, it is also important to focus on present knowledge that could improve the learning of subsequent tasks.
- Score: 16.84113206569365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Continual Learning (CL), a neural network is trained on a stream of data
whose distribution changes over time. In this context, the main problem is how
to learn new information without forgetting old knowledge (i.e., Catastrophic
Forgetting). Most existing CL approaches focus on preserving acquired
knowledge, thus working on the model's past. However, we argue that, as the
model has to continually learn new tasks, it is also important to focus on
present knowledge that could improve the learning of subsequent tasks. In
this paper we propose a new, simple, CL algorithm that focuses on solving the
current task in a way that might facilitate the learning of the next ones. More
specifically, our approach combines the main data stream with a secondary,
diverse and uncorrelated stream, from which the network can draw auxiliary
knowledge. This helps the model from different perspectives, since auxiliary
data may contain useful features for the current and the next tasks and
incoming task classes can be mapped onto auxiliary classes. Furthermore, the
addition of data to the current task implicitly makes the classifier more
robust, since we force the extraction of more discriminative features. Our
method can outperform existing state-of-the-art models on the most common CL
image classification benchmarks.
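The core idea of the abstract can be sketched as a batch-combination step: each training batch mixes main-task samples with samples from a diverse auxiliary stream, whose labels are remapped into a disjoint range so the classifier must also separate them from the task classes. This is a minimal illustrative sketch, not the paper's exact procedure; the function name and the label-shifting scheme are assumptions.

```python
import numpy as np

def combine_batches(main_x, main_y, aux_x, aux_y, num_main_classes):
    """Merge a main-task batch with an auxiliary-stream batch.

    Auxiliary labels are shifted into a range disjoint from the main-task
    classes, so the classifier is pushed to extract features that also
    discriminate the auxiliary data (hypothetical mapping; the paper's
    actual class-mapping strategy may differ).
    """
    x = np.concatenate([main_x, aux_x], axis=0)
    y = np.concatenate([main_y, aux_y + num_main_classes], axis=0)
    # Shuffle so main and auxiliary samples are interleaved in the batch.
    perm = np.random.permutation(len(x))
    return x[perm], y[perm]
```

The combined batch can then be fed to any standard classification loss; only the output layer needs room for the extra auxiliary classes.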
Related papers
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The label information of the images offers important semantic knowledge that can be related to previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z)
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose a beyond-prompt-learning approach to the rehearsal-free continual learning (RFCL) task, called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in CAL to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- Learning without Forgetting for Vision-Language Models [65.49600786387106]
Class-Incremental Learning (CIL) or continual learning is a desired capability in the real world.
Recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations.
We propose PROjectiOn Fusion (PROOF) that enables VLMs to learn without forgetting.
arXiv Detail & Related papers (2023-05-30T17:59:32Z)
- Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance of both a new task and old tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed-capacity neural network without data replay.
arXiv Detail & Related papers (2022-11-01T23:55:51Z)
- Center Loss Regularization for Continual Learning [0.0]
In general, neural networks lack the ability to learn different tasks sequentially.
Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks.
We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.
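The center-loss term mentioned in this entry penalizes the distance between each sample's feature vector and the running center of its class, which keeps new-task representations from drifting far from old ones. A minimal sketch of that regularizer (standard center-loss form; the paper's exact weighting and center-update rule are not given here):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean squared distance of each feature to its class center.

    features: (batch, dim) embedding vectors
    labels:   (batch,) integer class labels
    centers:  (num_classes, dim) per-class feature centers
    """
    diffs = features - centers[labels]  # deviation from each sample's class center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
```

In practice this term is added, with a small weight, to the usual cross-entropy loss, and the centers are updated as exponential moving averages of the features.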
arXiv Detail & Related papers (2021-10-21T17:46:44Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches only depend on the current task information during the adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Continual Class Incremental Learning for CT Thoracic Segmentation [36.45569352490318]
Deep learning organ segmentation approaches require large amounts of annotated training data, which is limited in supply due to reasons of confidentiality and the time required for expert manual annotation.
Being able to train models incrementally without having access to previously used data is desirable.
In this setting, a model learns a new task effectively, but loses performance on previously learned tasks.
The Learning without Forgetting (LwF) approach addresses this issue by replaying the model's own predictions for past tasks during training.
We show that LwF can successfully retain knowledge on previous segmentations, however, its ability to learn a new class decreases with the
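The LwF mechanism described above is commonly implemented as a distillation loss: the frozen old model's softened outputs on the current inputs serve as "replayed" targets for the current model. A sketch of that loss under the standard temperature-scaled formulation (details of the segmentation variant in this paper may differ):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the frozen old model's softened predictions
    (the replayed targets) and the current model's outputs on the same
    inputs; the temperature T follows the common LwF formulation."""
    targets = softmax(old_logits, T)
    log_probs = np.log(softmax(new_logits, T) + 1e-12)
    return -np.mean(np.sum(targets * log_probs, axis=1))
```

The total objective is then the new-task loss plus this distillation term, so the network is pulled toward its old behavior on past-task outputs while fitting the new class.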
arXiv Detail & Related papers (2020-08-12T20:08:39Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL)
In the training procedure of Class-IL, since the model has no knowledge of subsequent tasks, it only extracts features necessary for the tasks learned so far, whose information is insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
- Continual Representation Learning for Biometric Identification [47.15075374158398]
We propose a new continual learning (CL) setting, namely "continual representation learning", which focuses on learning better representations in a continuous way.
We demonstrate that existing CL methods can improve the representation in the new setting, and our method achieves better results than the competitors.
arXiv Detail & Related papers (2020-06-08T10:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.