Always Be Dreaming: A New Approach for Data-Free Class-Incremental
Learning
- URL: http://arxiv.org/abs/2106.09701v1
- Date: Thu, 17 Jun 2021 17:56:08 GMT
- Title: Always Be Dreaming: A New Approach for Data-Free Class-Incremental
Learning
- Authors: James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin,
Zsolt Kira
- Abstract summary: We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
- Score: 73.24988226158497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern computer vision applications suffer from catastrophic forgetting when
incrementally learning new concepts over time. The most successful approaches
to alleviate this forgetting require extensive replay of previously seen data,
which is problematic when memory constraints or data legality concerns exist.
In this work, we consider the high-impact problem of Data-Free
Class-Incremental Learning (DFCIL), where an incremental learning agent must
learn new concepts over time without storing generators or training data from
past tasks. One approach for DFCIL is to replay synthetic images produced by
inverting a frozen copy of the learner's classification model, but we show this
approach fails for common class-incremental benchmarks when using standard
distillation strategies. We diagnose the cause of this failure and propose a
novel incremental distillation strategy for DFCIL, contributing a modified
cross-entropy training and importance-weighted feature distillation, and show
that our method results in up to a 25.1% increase in final task accuracy
(absolute difference) compared to SOTA DFCIL methods for common
class-incremental benchmarks. Our method even outperforms several standard
replay-based methods that store a coreset of images.
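
To make the inversion-based replay idea above concrete, here is a minimal sketch in PyTorch. It illustrates the generic model-inversion recipe that DFCIL builds on, not the authors' released code: a batch of random images is optimized so that a frozen copy of the previous-task classifier assigns them confidently to old classes. The frozen_model interface, image size, total-variation weight, and other hyperparameters are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def invert_synthetic_batch(frozen_model, num_old_classes, batch_size=64,
                           image_shape=(3, 32, 32), steps=200, lr=0.05):
    """Synthesize pseudo-examples of old classes by inverting a frozen classifier.

    Illustrative sketch of DeepDream-style model inversion; the paper's actual
    procedure may add further regularizers (e.g. batch-norm statistic matching).
    """
    frozen_model.eval()
    # Start from random noise and assign each image a random old-class target.
    images = torch.randn(batch_size, *image_shape, requires_grad=True)
    targets = torch.randint(0, num_old_classes, (batch_size,))
    optimizer = torch.optim.Adam([images], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = frozen_model(images)
        # Push the frozen model toward confident predictions for the targets...
        ce = F.cross_entropy(logits[:, :num_old_classes], targets)
        # ...plus a mild total-variation prior to keep the images smooth.
        tv = (images[..., 1:, :] - images[..., :-1, :]).abs().mean() + \
             (images[..., :, 1:] - images[..., :, :-1]).abs().mean()
        (ce + 1e-3 * tv).backward()
        optimizer.step()

    return images.detach(), targets
```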
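
The two contributions named in the abstract can be sketched in the same hypothetical style. Below, the cross-entropy on real data is restricted to the new-task classes (one way to read "modified cross-entropy training"), and the penultimate features on the synthetic replay batch are matched to the frozen model's features under per-dimension weights ("importance-weighted feature distillation"). The features()/head() interface, the label convention, and the importance tensor are placeholders; the abstract does not specify how the weights are computed.

```python
import torch
import torch.nn.functional as F

def dfcil_losses(model, frozen_model, real_x, real_y, synth_x,
                 num_old_classes, importance, distill_weight=1.0):
    """One training step's losses for a DFCIL-style learner (illustrative only).

    Both models are assumed to expose `features(x)` (penultimate embeddings)
    and `head(feat)` (class logits); adapt to your architecture.
    """
    # Modified ("local") cross-entropy: real data from the current task is
    # classified only among the new classes, so old-class logits are not
    # suppressed by new-task data. Assumes real_y holds global class indices.
    new_logits = model.head(model.features(real_x))[:, num_old_classes:]
    ce_loss = F.cross_entropy(new_logits, real_y - num_old_classes)

    # Importance-weighted feature distillation on the synthetic replay batch:
    # keep the current feature extractor close to the frozen one, weighting
    # each feature dimension by its (assumed) importance to the old classes.
    with torch.no_grad():
        old_feat = frozen_model.features(synth_x)
    new_feat = model.features(synth_x)
    fd_loss = (importance * (new_feat - old_feat).pow(2)).mean()

    return ce_loss + distill_weight * fd_loss
```

A training step would then combine a synthetic batch from the inversion sketch above with the current task's real batch and minimize this combined loss.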
Related papers
- Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation [3.8506666685467343]
In continual learning, previous knowledge is forgotten when a model learns new tasks.
In this paper, we address this problem by acquiring transferable knowledge through self-distillation.
Our proposed method outperformed conventional methods in experiments on the CIFAR10, CIFAR100, and MiniImageNet datasets.
arXiv Detail & Related papers (2024-09-17T16:26:33Z) - Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
A key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate this problem, a line of methods proposes to replay data from previously experienced tasks when learning new tasks.
However, this is often impractical given memory constraints or data privacy concerns.
As a replacement, data-free data replay methods synthesize samples by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Prompt-Based Exemplar Super-Compression and Regeneration for
Class-Incremental Learning [22.676222987218555]
The super-compression and regeneration method ESCORT substantially increases the quantity and enhances the diversity of exemplars.
To minimize the domain gap between generated exemplars and real images, we propose partial compression and diffusion-based data augmentation.
arXiv Detail & Related papers (2023-11-30T05:59:31Z) - BOOT: Data-free Distillation of Denoising Diffusion Models with
Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has been recently proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z) - Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods is the use of an exemplar memory: a subset of past data is saved into a memory bank and replayed when training future tasks to prevent forgetting (a generic sketch of this exemplar-replay recipe appears after this list).
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free
Replay [52.251188477192336]
Few-shot class-incremental learning (FSCIL) has been proposed aiming to enable a deep learning system to incrementally learn new classes with limited data.
We show through empirical results that adopting data replay is surprisingly favorable.
We propose using data-free replay that can synthesize data by a generator without accessing real data.
arXiv Detail & Related papers (2022-07-22T17:30:51Z) - IB-DRR: Incremental Learning with Information-Back Discrete
Representation Replay [4.8666876477091865]
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes.
Saving a subset of training samples from previously seen classes in memory and replaying them during new training phases has proven to be an efficient and effective way to fulfil this aim.
However, finding a trade-off between the model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning.
arXiv Detail & Related papers (2021-04-21T15:32:11Z) - Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
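
Several entries above rely on the standard exemplar-replay recipe that the main paper avoids: keep a small coreset of past images and mix it into each new-task batch. The sketch below is a generic illustration of that recipe (a reservoir-sampled buffer plus replay sampling), not the buffer management of any specific paper listed here.

```python
import random
import torch

class ExemplarMemory:
    """A small coreset of past examples, updated by reservoir sampling.

    Generic illustration of replay-based class-incremental learning, not the
    exact mechanism of any paper above.
    """

    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.buffer = []          # list of (image_tensor, label) pairs
        self.num_seen = 0

    def add(self, x, y):
        for xi, yi in zip(x, y):
            self.num_seen += 1
            if len(self.buffer) < self.capacity:
                self.buffer.append((xi.clone(), int(yi)))
            else:
                # Reservoir sampling keeps every seen example with equal probability.
                j = random.randrange(self.num_seen)
                if j < self.capacity:
                    self.buffer[j] = (xi.clone(), int(yi))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        xs = torch.stack([b[0] for b in batch])
        ys = torch.tensor([b[1] for b in batch])
        return xs, ys

# Typical use: concatenate a sampled replay batch with the new-task batch
# before the forward pass, so old classes keep appearing in the training signal.
```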