Non-Exemplar Online Class-incremental Continual Learning via
Dual-prototype Self-augment and Refinement
- URL: http://arxiv.org/abs/2303.10891v3
- Date: Fri, 15 Dec 2023 12:12:03 GMT
- Title: Non-Exemplar Online Class-incremental Continual Learning via
Dual-prototype Self-augment and Refinement
- Authors: Fushuo Huo, Wenchao Xu, Jingcai Guo, Haozhao Wang, Yunfeng Fan,
and Song Guo
- Abstract summary: Non-exemplar Online Class-incremental continual Learning (NO-CL) is a new, practical, but challenging problem.
It aims to preserve the discernibility of base classes without buffering data examples and efficiently learn novel classes continuously in a single-pass data stream.
We propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem.
- Score: 21.323130310029327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates a new, practical, but challenging problem named
Non-exemplar Online Class-incremental continual Learning (NO-CL), which aims to
preserve the discernibility of base classes without buffering data examples and
efficiently learn novel classes continuously in a single-pass (i.e., online)
data stream. The challenges of this task are mainly two-fold: (1) Both base and
novel classes suffer from severe catastrophic forgetting as no previous samples
are available for replay. (2) As the online data can only be observed once,
there is no way to fully re-train the whole model, e.g., re-calibrate the
decision boundaries via prototype alignment or feature distillation. In this
paper, we propose a novel Dual-prototype Self-augment and Refinement method
(DSR) for the NO-CL problem, which consists of two strategies: 1) Dual class
prototypes: vanilla and high-dimensional prototypes are exploited to utilize
the pre-trained information and obtain robust quasi-orthogonal representations
rather than example buffers for both privacy preservation and memory reduction.
2) Self-augment and refinement: Instead of updating the whole network, we
alternately optimize the high-dimensional prototypes and an extra projection
module based on self-augmented vanilla prototypes, via a bi-level
optimization problem. Extensive experiments demonstrate the effectiveness and
superiority of the proposed DSR in NO-CL.
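The abstract describes the two strategies only at a high level. As a rough, hypothetical illustration (not the authors' implementation), the PyTorch sketch below builds vanilla class-mean prototypes from frozen pre-trained features, lifts them through a projection into a high-dimensional space where vectors are quasi-orthogonal, and self-augments the vanilla prototypes with noise; all dimensions, the noise scale, and the fixed random projection are assumptions (the paper instead refines a learned projection module through bi-level optimization).

```python
# Minimal sketch of the dual-prototype idea, NOT the authors' exact method:
# vanilla prototypes are class means of frozen pre-trained features; a
# projection lifts them to a high-dimensional space where random vectors
# are nearly orthogonal, so class representations interfere less.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feat_dim, high_dim, num_classes = 512, 8192, 10   # hypothetical sizes

# Frozen pre-trained features for the base session (placeholder data).
features = torch.randn(1000, feat_dim)
labels = torch.randint(0, num_classes, (1000,))

# 1) Vanilla prototypes: per-class means of pre-trained features.
vanilla = torch.stack([features[labels == c].mean(0) for c in range(num_classes)])

# 2) High-dimensional prototypes: a fixed random projection here; the paper
#    instead refines a learned projection module jointly with the prototypes.
proj = torch.randn(feat_dim, high_dim) / high_dim ** 0.5
high_protos = F.normalize(vanilla @ proj, dim=1)   # quasi-orthogonal rows

# Self-augmentation: jitter vanilla prototypes with Gaussian noise to create
# virtual samples, so no real exemplars ever need to be buffered.
virtual = vanilla.unsqueeze(0) + 0.1 * torch.randn(64, num_classes, feat_dim)

# Nearest-prototype prediction for a few queries in the high-dimensional space.
queries = F.normalize(features[:5] @ proj, dim=1)
print((queries @ high_protos.T).argmax(dim=1))
```

The quasi-orthogonality relied on here is the standard fact that random vectors in a high-dimensional space are nearly orthogonal, which is what lets prototypes stand in for exemplar buffers without cross-class interference.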
Related papers
- I2CANSAY: Inter-Class Analogical Augmentation and Intra-Class Significance Analysis for Non-Exemplar Online Task-Free Continual Learning [42.608860809847236]
Online task-free continual learning (OTFCL) is a more challenging variant of continual learning.
Existing methods rely on a memory buffer composed of old samples to prevent forgetting.
We propose a novel framework called I2CANSAY that removes the dependence on memory buffers and efficiently learns the knowledge of new data from one-shot samples.
arXiv Detail & Related papers (2024-04-21T08:28:52Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
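As its title indicates, RanPAC's core ingredient is a random projection on top of a frozen pre-trained model. The sketch below shows that general ingredient only, with class prototypes accumulated in a single online pass; the dimensions, the ReLU nonlinearity, and the simple class-mean classifier are assumptions, not the paper's exact recipe (the paper pairs the projection with a more careful closed-form classifier).

```python
# Sketch of a random-projection prototype classifier over frozen pre-trained
# features, in the spirit of RanPAC's title; details here are assumptions.
import torch

torch.manual_seed(0)
feat_dim, rp_dim = 768, 10000               # hypothetical: ViT features, big RP

W = torch.randn(feat_dim, rp_dim)           # frozen random projection
phi = lambda f: torch.relu(f @ W)           # nonlinearity after projecting

protos, counts = {}, {}                     # class id -> sum / sample count

def update(f, y):                           # one online pass, no replay buffer
    h = phi(f)
    protos[y] = protos.get(y, torch.zeros(rp_dim)) + h
    counts[y] = counts.get(y, 0) + 1

def predict(f):
    h = phi(f)
    means = torch.stack([protos[c] / counts[c] for c in sorted(protos)])
    return sorted(protos)[int((h @ means.T).argmax())]
```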
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on fine-tuning an adversarially pre-trained model for various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
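The prototypical self-training stage is only named, not detailed, in the summary above. A generic sketch of the idea, essentially spherical k-means with nearest-prototype pseudo-labels, might look like the following; all shapes, the cosine-similarity assignment, and the iteration count are assumptions:

```python
# Generic sketch of prototypical self-training on unlabelled data: assign
# pseudo-labels by nearest prototype, then re-estimate prototypes. This is
# an illustration of the named stage, not the paper's exact procedure.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
feats = F.normalize(torch.randn(500, 128), dim=1)   # embeddings of unknowns
protos = F.normalize(torch.randn(5, 128), dim=1)    # initial prototypes

for _ in range(10):
    sims = feats @ protos.T                 # cosine similarity to prototypes
    assign = sims.argmax(dim=1)             # pseudo-labels for each sample
    for c in range(protos.size(0)):         # re-estimate each prototype
        members = feats[assign == c]
        if len(members) > 0:
            protos[c] = F.normalize(members.mean(0), dim=0)
```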
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
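The pull/push behaviour described in the CPLAE summary above matches a standard prototype-anchored contrastive objective. A minimal sketch, with the temperature and shapes as assumptions rather than the CPLAE loss verbatim:

```python
# Prototype-anchored contrastive loss sketch: with the class prototype as
# anchor, same-class queries are pulled closer and other-class queries are
# pushed away. Temperature and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def proto_contrastive_loss(queries, labels, prototypes, tau=0.1):
    q = F.normalize(queries, dim=1)         # (N, D) query embeddings
    p = F.normalize(prototypes, dim=1)      # (C, D) class prototypes
    logits = q @ p.T / tau                  # similarity to every prototype
    # cross-entropy maximizes similarity to the true-class prototype (pull)
    # while minimizing it for all other prototypes (push)
    return F.cross_entropy(logits, labels)

loss = proto_contrastive_loss(torch.randn(32, 64),
                              torch.randint(0, 5, (32,)),
                              torch.randn(5, 64))
```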
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL faces two main challenges: 1) novel class detection, and 2) model update: after the novel classes are detected, the model must be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
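For the novel class detection challenge named above, one common recipe (hypothetical here, not necessarily this paper's rule) is distance-based rejection: flag a sample as novel when it is far from every known-class prototype. A sketch with an illustrative threshold:

```python
# Sketch of distance-based novel class detection: reject a sample as novel
# when its nearest known-class prototype is farther than a threshold. The
# threshold and distance choice are assumptions, not the paper's criterion.
import torch

def detect_novel(feat, prototypes, threshold=1.5):
    dists = torch.cdist(feat.unsqueeze(0), prototypes).squeeze(0)
    return bool(dists.min() > threshold)    # True -> treat as a novel class

protos = torch.randn(10, 256)               # known-class prototypes
print(detect_novel(torch.randn(256), protos))
```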
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face reduced generalization to unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)