Class Impression for Data-free Incremental Learning
- URL: http://arxiv.org/abs/2207.00005v2
- Date: Mon, 4 Jul 2022 15:09:55 GMT
- Title: Class Impression for Data-free Incremental Learning
- Authors: Sana Ayromlou and Purang Abolmaesumi and Teresa Tsang and Xiaoxiao Li
- Abstract summary: Deep learning-based classification approaches require collecting all samples from all classes in advance and are trained offline.
This paradigm may not be practical in real-world clinical applications, where new classes are incrementally introduced through the addition of new data.
We propose a novel data-free class incremental learning framework that first synthesizes data from the model trained on previous classes to generate a Class Impression.
- Score: 20.23329169244367
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard deep learning-based classification approaches require collecting all
samples from all classes in advance and are trained offline. This paradigm may
not be practical in real-world clinical applications, where new classes are
incrementally introduced through the addition of new data. Class incremental
learning is a strategy allowing learning from such data. However, a major
challenge is catastrophic forgetting, i.e., performance degradation on previous
classes when adapting a trained model to new data. Prior methodologies to
alleviate this challenge save a portion of the training data, requiring perpetual
storage of such data, which may introduce privacy issues. Here, we propose a
novel data-free class incremental learning framework that first synthesizes
data from the model trained on previous classes to generate a Class Impression.
Subsequently, it updates the model by combining the synthesized data with new
class data. Furthermore, we incorporate a cosine normalized Cross-entropy loss
to mitigate the adverse effects of the imbalance, a margin loss to increase
separation among previous classes and new ones, and an intra-domain contrastive
loss to generalize the model trained on the synthesized data to real data. We
compare our proposed framework with state-of-the-art methods in class
incremental learning, where we demonstrate improvement in accuracy for the
classification of 11,062 echocardiography cine series of patients.
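The synthesis step can be pictured as model inversion: starting from random noise, inputs are optimized until the frozen model trained on previous classes confidently predicts a stored class. The sketch below is a minimal PyTorch illustration of that idea; the input shape, step count, learning rate, and L2 prior are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def synthesize_class_impression(frozen_model, class_idx,
                                shape=(8, 3, 64, 64), steps=200, lr=0.05):
    """Invert a frozen classifier: optimize a batch of noise images until
    the model confidently predicts `class_idx`."""
    frozen_model.eval()
    x = torch.randn(shape, requires_grad=True)            # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.full((shape[0],), class_idx, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        # Cross-entropy pulls the batch toward the stored class; the small
        # L2 prior keeps pixel values in a plausible range.
        loss = F.cross_entropy(frozen_model(x), target) + 1e-4 * x.pow(2).mean()
        loss.backward()
        optimizer.step()
    return x.detach()
```

In the framework above, impressions synthesized this way would then be combined with the new-class data for the model update.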
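The three auxiliary losses can likewise be sketched from the abstract alone; the scale, margin, and temperature values below are assumptions, and the intra-domain contrastive term, whose exact form the abstract does not give, is rendered here as one plausible reading that pairs synthesized with real features.

```python
import torch
import torch.nn.functional as F

def cosine_ce(features, class_weights, labels, scale=10.0):
    """Cosine-normalized cross-entropy: logits are scaled cosine similarities
    between L2-normalized features and class weight vectors, which removes
    the magnitude bias between old (synthesized) and new (real) classes."""
    logits = scale * F.normalize(features) @ F.normalize(class_weights).t()
    return F.cross_entropy(logits, labels)

def margin_loss(logits, labels, old_classes, margin=0.5):
    """Hinge pushing each sample's true-class logit at least `margin` above
    its strongest competitor from the other group (old vs. new classes)."""
    true = logits.gather(1, labels[:, None]).squeeze(1)
    old_mask = torch.zeros(logits.shape[1], dtype=torch.bool)
    old_mask[old_classes] = True
    is_old = old_mask[labels]               # does the label belong to an old class?
    competitors = torch.where(is_old[:, None], ~old_mask, old_mask)
    best_rival = logits.masked_fill(~competitors, float('-inf')).max(dim=1).values
    return F.relu(margin - (true - best_rival)).mean()

def contrastive_syn_real(f_syn, f_real, y_syn, y_real, tau=0.1):
    """Pull same-class synthesized and real features together and push other
    classes apart, so a model trained on synthesized data transfers to real data."""
    sim = F.normalize(f_syn) @ F.normalize(f_real).t() / tau
    pos = (y_syn[:, None] == y_real[None, :]).float()
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```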
Related papers
- Few-Shot Class-Incremental Learning with Non-IID Decentralized Data [12.472285188772544]
Few-shot class-incremental learning is crucial for developing scalable and adaptive intelligent systems.
This paper introduces federated few-shot class-incremental learning, a decentralized machine learning paradigm.
We present a synthetic data-driven framework that leverages replay buffer data to maintain existing knowledge and facilitate the acquisition of new knowledge.
arXiv Detail & Related papers (2024-09-18T02:48:36Z)
- CCSI: Continual Class-Specific Impression for Data-free Class Incremental Learning [22.37848405465699]
Class incremental learning offers a promising solution by adapting a deep network trained on specific disease classes to handle new diseases.
Previously proposed methodologies to overcome catastrophic forgetting require perpetual storage of previous samples.
We propose a novel data-free class incremental learning framework that utilizes data synthesis on learned classes instead of data storage from previous classes.
arXiv Detail & Related papers (2024-06-09T03:52:21Z)
- Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective [4.31734012105466]
Machine Unlearning is the process of selectively discarding information designated to specific sets or classes of data from a pre-trained model.
We propose a methodology tailored for the purposeful elimination of information linked to a specific class of data from a pre-trained classification network.
Our novel approach, termed Partially-Blinded Unlearning (PBU), surpasses existing state-of-the-art class unlearning methods, demonstrating superior effectiveness.
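The summary does not spell out PBU's mechanics, so the sketch below shows a common class-unlearning baseline for reference only: gradient ascent on the class to forget, anchored by ordinary training on retained data. All hyperparameters are illustrative, and this is not PBU itself.

```python
import torch
import torch.nn.functional as F

def unlearn_class(model, forget_loader, retain_loader, epochs=3, lr=1e-4, alpha=1.0):
    """Naive class unlearning: ascend the loss on the forget set while
    descending it on the retain set to preserve remaining knowledge."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (x_f, y_f), (x_r, y_r) in zip(forget_loader, retain_loader):
            optimizer.zero_grad()
            loss = (-alpha * F.cross_entropy(model(x_f), y_f)   # forget
                    + F.cross_entropy(model(x_r), y_r))          # retain
            loss.backward()
            optimizer.step()
    return model
```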
arXiv Detail & Related papers (2024-03-24T17:33:22Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, storing such data is often impractical given memory constraints or data privacy issues.
As a replacement, data-free data replay methods synthesize samples by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose the neural collapse terminus, a fixed structure with maximal equiangular inter-class separation for the whole label space.
Our method holds the neural collapse optimality in an incremental fashion regardless of data imbalance or data scarcity.
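In the neural-collapse literature, a fixed structure with maximal equiangular inter-class separation is a simplex equiangular tight frame (ETF), which can be built in closed form and frozen as the classifier for the whole label space. The construction below is the standard one; treating it verbatim as the terminus is an inference from the summary.

```python
import torch

def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """K unit vectors in `dim` dimensions (dim >= K) whose pairwise cosine
    is exactly -1/(K-1), the maximal equiangular separation for K classes."""
    K = num_classes
    assert dim >= K, "this construction needs dim >= num_classes"
    U, _ = torch.linalg.qr(torch.randn(dim, K))       # orthonormal columns
    center = torch.eye(K) - torch.full((K, K), 1.0 / K)
    M = (K / (K - 1)) ** 0.5 * U @ center             # columns are class vectors
    return M

W = simplex_etf(num_classes=10, dim=512)
cosines = W.t() @ W            # diagonal = 1, off-diagonal = -1/9
```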
arXiv Detail & Related papers (2023-08-03T13:09:59Z)
- On-the-fly Denoising for Data Augmentation in Natural Language Understanding [101.46848743193358]
We propose an on-the-fly denoising technique for data augmentation that learns from soft augmented labels provided by an organic teacher model trained on the cleaner original data.
Our method can be applied to general augmentation techniques and consistently improve the performance on both text classification and question-answering tasks.
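The mechanism, soft labels from an "organic" teacher trained on the cleaner original data supervising noisy augmented examples, is essentially knowledge distillation. The sketch below is a generic rendering of that idea; the temperature and mixing weight are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def denoised_augmentation_loss(student_logits, teacher_logits, labels,
                               temperature=2.0, alpha=0.7):
    """Blend the hard (possibly noisy) augmented label with the soft label
    produced by a teacher trained on the cleaner original data."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    distill = F.kl_div(log_student, soft_targets,
                       reduction='batchmean') * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```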
arXiv Detail & Related papers (2022-12-20T18:58:33Z)
- Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training with new classes, and that it outperforms existing state-of-the-art algorithms in accuracy across different sessions.
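The summary points to a quadruplet-based contrastive loss; the standard quadruplet loss below shows the general shape (anchor, positive, and two negatives with two margins). The margins and the Euclidean distance are illustrative defaults, not necessarily the paper's formulation.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2, m1=1.0, m2=0.5):
    """Triplet term plus a second term that keeps positive pairs tighter
    than pairs of mutually distinct negatives."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative1)
    d_nn = F.pairwise_distance(negative1, negative2)
    triplet = F.relu(d_ap - d_an + m1)   # anchor closer to positive than negative1
    push = F.relu(d_ap - d_nn + m2)      # positive pairs tighter than negative pairs
    return (triplet + push).mean()
```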
arXiv Detail & Related papers (2022-11-05T17:19:14Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
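In this family of methods, the meta-model is a small MLP that maps per-sample losses to weights and is itself trained with a bi-level objective on clean validation data. The sketch below shows only the weighting network and how it rescales a batch loss; the architecture and the omitted meta-update loop are assumptions.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Tiny MLP mapping each sample's loss value to a weight in (0, 1)."""
    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, losses):                     # losses: (batch,)
        return self.net(losses.unsqueeze(1)).squeeze(1)

# Re-weighting a batch (the meta-update on clean data is omitted here):
per_sample_loss = torch.rand(32)                   # stand-in for CE losses
weights = WeightNet()(per_sample_loss.detach())
weighted_loss = (weights * per_sample_loss).mean()
```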
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Learning to Generate Novel Classes for Deep Metric Learning [24.048915378172012]
We introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors.
We implement this idea by learning and exploiting a conditional generative model, which, given a class label and a noise vector, produces a random embedding vector of the class.
Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples.
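A minimal form of such a conditional generator concatenates a learned class embedding with noise and maps the result into the metric space. The dimensions and architecture below are placeholders; the paper's generator and its training objective are not specified in the summary.

```python
import torch
import torch.nn as nn

class EmbeddingGenerator(nn.Module):
    """Given a class label and noise, emit a random embedding vector of
    that class."""
    def __init__(self, num_classes, noise_dim=32, embed_dim=128):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(nn.Linear(2 * noise_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))

    def forward(self, labels, noise=None):
        if noise is None:
            noise = torch.randn(labels.shape[0], self.label_embed.embedding_dim)
        return self.net(torch.cat([self.label_embed(labels), noise], dim=1))

generator = EmbeddingGenerator(num_classes=100)
synthetic = generator(torch.randint(0, 100, (16,)))   # 16 synthetic embeddings
```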
arXiv Detail & Related papers (2022-01-04T06:55:19Z)
- Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [56.22467011292147]
Several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection.
Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model update.
After the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
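The first challenge, detecting samples from unknown classes, is commonly approached by thresholding a confidence score; the sketch below uses maximum softmax probability as that score. This is an assumed baseline, not necessarily this paper's detector.

```python
import torch
import torch.nn.functional as F

def detect_novel(logits, threshold=0.5):
    """Flag samples whose maximum softmax probability falls below a
    threshold as candidates for novel (unseen) classes."""
    confidence = F.softmax(logits, dim=-1).max(dim=-1).values
    return confidence < threshold          # True -> likely novel class

is_novel = detect_novel(torch.randn(8, 10))   # 8 samples over 10 known classes
```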
arXiv Detail & Related papers (2020-08-31T04:11:24Z)