R-DFCIL: Relation-Guided Representation Learning for Data-Free Class
Incremental Learning
- URL: http://arxiv.org/abs/2203.13104v1
- Date: Thu, 24 Mar 2022 14:54:15 GMT
- Title: R-DFCIL: Relation-Guided Representation Learning for Data-Free Class
Incremental Learning
- Authors: Qiankun Gao, Chen Zhao, Bernard Ghanem, Jian Zhang
- Abstract summary: Class-Incremental Learning (CIL) struggles with catastrophic forgetting when learning new knowledge.
Though recent DFCIL works introduce techniques such as model inversion to synthesize data for previous classes, they fail to overcome forgetting due to the severe domain gap between the synthetic and real data.
This paper proposes relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL.
- Score: 64.7996065569457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-Incremental Learning (CIL) struggles with catastrophic forgetting when
learning new knowledge, and Data-Free CIL (DFCIL) is even more challenging
without access to the training data of previous classes. Though recent DFCIL
works introduce techniques such as model inversion to synthesize data for
previous classes, they fail to overcome forgetting due to the severe domain gap
between the synthetic and real data. To address this issue, this paper proposes
relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL. In
RRL, we introduce relational knowledge distillation to flexibly transfer the
structural relation of new data from the old model to the current model. Our
RRL-boosted DFCIL can guide the current model to learn representations of new classes that are more compatible with those of previous classes, which
greatly reduces forgetting while improving plasticity. To avoid the mutual
interference between representation and classifier learning, we employ local
rather than global classification loss during RRL. After RRL, the
classification head is fine-tuned with global class-balanced classification
loss to address the data imbalance issue as well as learn the decision boundary
between new and previous classes. Extensive experiments on CIFAR100,
Tiny-ImageNet200, and ImageNet100 demonstrate that our R-DFCIL significantly
surpasses previous approaches and achieves a new state-of-the-art performance
for DFCIL.
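The abstract is prose-only; as a rough PyTorch sketch (not the authors' released code), the snippet below illustrates one plausible form of the two training signals described above: a relational knowledge distillation loss that matches normalized pairwise feature distances of a new-class batch between the frozen old model and the current model, and a local classification loss that restricts cross-entropy to the new classes' logits. All names (`old_model`, `model`, `n_old_classes`, `lambda_rkd`) are illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's code): relational KD on
# pairwise feature distances plus a "local" cross-entropy over new classes.
import torch
import torch.nn.functional as F


def relational_kd_loss(old_feats: torch.Tensor, new_feats: torch.Tensor) -> torch.Tensor:
    """Match the structural relation (mean-normalized pairwise L2 distances)
    of a batch between the frozen old model and the current model."""
    with torch.no_grad():
        d_old = torch.cdist(old_feats, old_feats, p=2)
        d_old = d_old / (d_old.mean() + 1e-8)  # scale-invariant relation
    d_new = torch.cdist(new_feats, new_feats, p=2)
    d_new = d_new / (d_new.mean() + 1e-8)
    return F.smooth_l1_loss(d_new, d_old)


def local_classification_loss(logits: torch.Tensor, targets: torch.Tensor,
                              n_old_classes: int) -> torch.Tensor:
    """Cross-entropy restricted to the new classes' logits, so classification
    gradients do not interfere with old-class logits during RRL.
    Assumes all targets in the batch are new classes."""
    return F.cross_entropy(logits[:, n_old_classes:], targets - n_old_classes)


# Hypothetical training step for a new-class batch (x, y):
#   old_feats = old_model.backbone(x)          # frozen old model
#   feats = model.backbone(x)
#   logits = model.head(feats)
#   loss = local_classification_loss(logits, y, n_old_classes) \
#          + lambda_rkd * relational_kd_loss(old_feats, feats)
```

After RRL, the abstract fine-tunes the classification head with a global class-balanced loss; that stage is not shown here.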
Related papers
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
A key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models (a rough sketch of the idea follows this entry).
Our method significantly outperforms existing baselines.
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
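The summary gives no implementation details, so the sketch below (hypothetical names throughout, including `ConditionalDiffusionModel` and its `sample` method) only illustrates the general idea of diffusion-driven replay: synthesize samples of previously seen classes from a class-conditional diffusion model and mix them into the current task's batches.

```python
# Hypothetical sketch of diffusion-driven data replay; the placeholder
# ConditionalDiffusionModel is not a real API and the paper's actual
# procedure may differ.
import torch


class ConditionalDiffusionModel:
    """Placeholder interface; a real model would run the reverse diffusion
    process conditioned on a class label."""

    def sample(self, class_label: int, num_samples: int) -> torch.Tensor:
        return torch.randn(num_samples, 3, 32, 32)  # noise stand-in


def build_replay_batch(model: ConditionalDiffusionModel, old_classes,
                       n_per_class: int):
    """Synthesize a labeled replay batch for previously seen classes."""
    images, labels = [], []
    for c in old_classes:
        images.append(model.sample(class_label=c, num_samples=n_per_class))
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(images), torch.cat(labels)


# During new-task training, interleave real and synthetic data:
#   x_replay, y_replay = build_replay_batch(diffusion_model, old_classes, 8)
#   x, y = torch.cat([x_new, x_replay]), torch.cat([y_new, y_replay])
```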
- PILoRA: Prototype Guided Incremental LoRA for Federated Class-Incremental Learning [41.984652077669104]
Experimental results on standard datasets indicate that our method significantly outperforms state-of-the-art approaches.
Our method exhibits strong robustness and superiority in different settings and degrees of data heterogeneity.
arXiv Detail & Related papers (2024-01-04T06:46:19Z)
- Federated Latent Class Regression for Hierarchical Data [5.110894308882439]
Federated Learning (FL) allows a number of agents to participate in training a global machine learning model without disclosing locally stored data.
We propose a novel probabilistic model, Hierarchical Latent Class Regression (HLCR), and its extension to Federated Learning, FEDHLCR.
Our inference algorithm, derived from Bayesian theory, provides strong convergence guarantees and good robustness to overfitting. Experimental results show that FEDHLCR offers fast convergence even on non-IID datasets.
arXiv Detail & Related papers (2022-06-22T00:33:04Z)
- New Insights on Reducing Abrupt Representation Change in Online Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes; one possible implementation is sketched after this entry.
arXiv Detail & Related papers (2022-03-08T01:37:00Z)
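The summary does not spell out the mechanism; one way to implement such shielding, assumed here rather than taken from the paper, is an asymmetric cross-entropy that masks old-class logits for incoming new-class samples while replayed samples keep the full label space:

```python
# Assumed implementation of "shielding": incoming samples are scored only
# against classes present in their batch, so old-class logits (and the
# representations behind them) are not dragged toward the new classes.
import torch
import torch.nn.functional as F


def shielded_ce(logits_in: torch.Tensor, y_in: torch.Tensor,
                logits_replay: torch.Tensor, y_replay: torch.Tensor) -> torch.Tensor:
    mask = torch.full_like(logits_in, float("-inf"))
    mask[:, y_in.unique()] = 0.0           # keep only in-batch classes
    loss_in = F.cross_entropy(logits_in + mask, y_in)
    loss_replay = F.cross_entropy(logits_replay, y_replay)  # full label space
    return loss_in + loss_replay
```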
- Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding [60.226644697970116]
Domain classification is a fundamental task in natural language understanding (NLU).
Most existing continual learning approaches suffer from low accuracy and performance fluctuation.
We propose a hyperparameter-free continual learning model for text data that stably produces high performance under various environments.
arXiv Detail & Related papers (2022-01-05T02:46:16Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them with new class data, they suffer from catastrophic forgetting: the model cannot clearly distinguish old-class data from new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation; a rough sketch of the latter follows this entry.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
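A rough reading of "importance-weighted feature distillation", assumed rather than taken from the paper: a per-dimension weighted L2 penalty between the frozen old model's features and the current model's features, e.g. on synthesized old-class inputs:

```python
# Assumed form of importance-weighted feature distillation; the paper's
# exact weighting may differ.
import torch


def weighted_feature_distill(old_feats: torch.Tensor, new_feats: torch.Tensor,
                             w: torch.Tensor) -> torch.Tensor:
    """Per-dimension weighted L2 penalty; w has shape (feature_dim,)."""
    return (w * (new_feats - old_feats.detach()).pow(2)).sum(dim=1).mean()


# One plausible (assumed) choice of w: magnitudes of the old linear head's
# weights, so dimensions the old classifier relied on are preserved:
#   w = old_model.head.weight.abs().mean(dim=0)
```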
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Unlike traditional closed-set learning, CIL has two main challenges: 1) detecting novel classes, and 2) updating the model: once novel classes are detected, the model must be updated without retraining on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
- CRL: Class Representative Learning for Image Classification [5.11566193457943]
We propose a novel model, called the Class Representative Learning Model (CRL), that can be especially effective in image classification influenced by zero-shot learning (ZSL).
In the CRL model, the learning step first builds class representatives that stand for the classes in a dataset by aggregating prominent features extracted from a Convolutional Neural Network (CNN); a rough sketch of this aggregation follows this entry.
The proposed CRL model demonstrated superior performance compared to the current state of the art in ZSL and mobile deep learning.
arXiv Detail & Related papers (2020-02-16T17:02:59Z)
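As a concrete reading of "aggregating prominent CNN features into class representatives" (the mean-feature aggregation below is an assumption, not the paper's exact rule), one representative per class can be built and queries classified by cosine similarity to the nearest representative:

```python
# Assumed sketch: one representative per class as the mean of its CNN
# features, with nearest-representative (cosine) classification.
import torch
import torch.nn.functional as F


def build_representatives(feats: torch.Tensor, labels: torch.Tensor,
                          n_classes: int) -> torch.Tensor:
    """Average the features of each class; assumes every class appears."""
    reps = torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])
    return F.normalize(reps, dim=1)


def classify(query_feats: torch.Tensor, reps: torch.Tensor) -> torch.Tensor:
    sims = F.normalize(query_feats, dim=1) @ reps.t()  # cosine similarity
    return sims.argmax(dim=1)
```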
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.