TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation
- URL: http://arxiv.org/abs/2303.06937v3
- Date: Thu, 17 Aug 2023 08:09:19 GMT
- Title: TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation
- Authors: Jie Zhang, Chen Chen, Weiming Zhuang, Lingjuan Lyu
- Abstract summary: This paper focuses on an under-explored yet important problem: Federated Class-Continual Learning (FCCL).
Existing FCCL works suffer from various limitations, such as requiring additional datasets or storing the private data from previous tasks.
We propose a novel method called TARGET, which alleviates catastrophic forgetting in FCCL while preserving client data privacy.
- Score: 9.556059871106351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on an under-explored yet important problem: Federated
Class-Continual Learning (FCCL), where new classes are dynamically added in
federated learning. Existing FCCL works suffer from various limitations, such
as requiring additional datasets or storing the private data from previous
tasks. In response, we first demonstrate that non-IID data exacerbates the
catastrophic forgetting issue in FL. Then we propose a novel method called
TARGET (federaTed clAss-continual leaRninG via Exemplar-free disTillation),
which alleviates catastrophic forgetting in FCCL while preserving client data
privacy. Our
proposed method leverages the previously trained global model to transfer
knowledge of old tasks to the current task at the model level. Moreover, a
generator is trained to produce synthetic data to simulate the global
distribution of data on each client at the data level. Compared to previous
FCCL methods, TARGET does not require any additional datasets or storing real
data from previous tasks, which makes it ideal for data-sensitive scenarios.
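As a rough illustration of the two levels described in the abstract, the sketch below combines a cross-entropy loss on the current task's real data with a distillation loss from the frozen previous global model computed on generator-produced synthetic data. This is a minimal PyTorch sketch under assumed interfaces: client_update_step, old_global, generator.latent_dim, kd_weight, and temperature are illustrative names, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def client_update_step(model, old_global, generator, x_real, y_real,
                       kd_weight=1.0, temperature=2.0):
    # Data level: synthetic samples approximating the global distribution
    # of old tasks (generator.latent_dim is an assumed attribute).
    with torch.no_grad():
        z = torch.randn(x_real.size(0), generator.latent_dim)
        x_syn = generator(z)
        teacher_logits = old_global(x_syn)  # frozen previous global model

    # Model level: distill the old model's behaviour onto the current model.
    student_log_probs = F.log_softmax(model(x_syn) / temperature, dim=1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=1)
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * temperature ** 2

    # Supervised loss on the current task's private data.
    ce_loss = F.cross_entropy(model(x_real), y_real)
    return ce_loss + kd_weight * kd_loss
```

In a full FCCL round, each client would apply such a step to its local batches before the server aggregates the updated models.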
Related papers
- Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning [13.836798036474143]
A key challenge in Federated Class Continual Learning is catastrophic forgetting.
We propose a novel method of data replay based on diffusion models.
Our method significantly outperforms existing baselines.
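The summary leaves the replay procedure abstract; a generic sketch of diffusion-based replay, assuming a hypothetical class-conditional diffusion.sample(labels) interface and a 1-D tensor old_classes of previously seen class ids, might look like this (the paper's exact pipeline may differ):

```python
import torch
import torch.nn.functional as F

def replay_batch(diffusion, old_classes, n):
    # old_classes: 1-D tensor of previously seen class ids.
    labels = old_classes[torch.randint(len(old_classes), (n,))]
    images = diffusion.sample(labels)  # assumed class-conditional sampler
    return images, labels

def train_step(model, diffusion, old_classes, x_new, y_new, replay_frac=0.5):
    # Mix generated old-class samples into the current batch.
    n_replay = max(1, int(replay_frac * x_new.size(0)))
    x_old, y_old = replay_batch(diffusion, old_classes, n_replay)
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])
    return F.cross_entropy(model(x), y)
```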
arXiv Detail & Related papers (2024-09-02T10:07:24Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fake It Till Make It: Federated Learning with Consensus-Oriented Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z)
- A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks [34.971800168823215]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions.
To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
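A minimal sketch of such server-side data-free generator training, in the spirit of model inversion: the generator learns to produce samples that the frozen global model confidently assigns to randomly drawn labels. The loop and names below are illustrative assumptions; the paper's objective may include additional regularizers.

```python
import torch
import torch.nn.functional as F

def train_generator(generator, frozen_model, num_classes, latent_dim,
                    steps=1000, batch_size=64, lr=1e-3):
    # No client data is used: only the frozen global model guides training.
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    frozen_model.eval()
    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim)
        y = torch.randint(num_classes, (batch_size,))
        logits = frozen_model(generator(z))
        # Inversion loss: push generated samples toward the sampled labels.
        loss = F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```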
arXiv Detail & Related papers (2023-11-13T22:21:27Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Dealing with Cross-Task Class Discrimination in Online Continual Learning [54.31411109376545]
This paper argues for another challenge in class-incremental learning (CIL): how to establish decision boundaries between the classes of the new task and the old tasks with no (or limited) access to the old task data.
A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current task data arrives, the system trains jointly on the new data and some sampled replay data.
This paper argues that the replay approach also has a dynamic training bias issue, which reduces the effectiveness of the replay data in solving the cross-task class discrimination (CTCD) problem.
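For concreteness, the baseline replay step being critiqued can be sketched as follows, with buffer a small list of stored (x, y) pairs; all names are illustrative assumptions:

```python
import random
import torch
import torch.nn.functional as F

def replay_train_step(model, optimizer, buffer, x_new, y_new, n_replay=32):
    # buffer: list of (x, y) tensor pairs saved from previous tasks.
    xs, ys = [x_new], [y_new]
    if buffer:
        sampled = random.sample(buffer, min(n_replay, len(buffer)))
        x_old, y_old = zip(*sampled)
        xs.append(torch.stack(x_old))
        ys.append(torch.stack(y_old))
    x, y = torch.cat(xs), torch.cat(ys)  # joint batch of new + replay data
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```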
arXiv Detail & Related papers (2023-05-24T02:52:30Z)
- Going beyond research datasets: Novel intent discovery in the industry setting [60.90117614762879]
This paper proposes methods to improve the intent discovery pipeline deployed in a large e-commerce platform.
We show the benefit of pre-training language models on in-domain data: both self-supervised and with weak supervision.
We also devise the best method to utilize the conversational structure (i.e., question and answer) of real-life datasets during fine-tuning for clustering tasks, which we call Conv.
arXiv Detail & Related papers (2023-05-09T14:21:29Z)
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models [13.340759455910721]
We propose a novel method to prevent zero-shot transfer degradation in the continual learning of vision-language models.
Our method outperforms other methods in the traditional class-incremental learning setting.
arXiv Detail & Related papers (2023-03-12T10:28:07Z)
- Task Residual for Tuning Vision-Language Models [69.22958802711017]
We propose a new efficient tuning approach for vision-language models (VLMs) named Task Residual Tuning (TaskRes).
TaskRes explicitly decouples the prior knowledge of the pre-trained models and new knowledge regarding a target task.
The proposed TaskRes is simple yet effective, significantly outperforming previous methods on 11 benchmark datasets.
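A compact sketch of the decoupling idea as summarized: keep the classifier weights derived from the pre-trained model frozen and learn only an additive residual for the target task. base_weights (e.g., text embeddings of class names) and alpha are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class TaskResidualHead(nn.Module):
    def __init__(self, base_weights, alpha=0.5):
        super().__init__()
        # Frozen prior knowledge from the pre-trained model
        # (base_weights is an assumed input).
        self.register_buffer("base", base_weights)
        # New task-specific knowledge, learned from scratch.
        self.residual = nn.Parameter(torch.zeros_like(base_weights))
        self.alpha = alpha

    def forward(self, features):
        weights = self.base + self.alpha * self.residual
        return features @ weights.t()
```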
arXiv Detail & Related papers (2022-11-18T15:09:03Z)
- Learning Across Domains and Devices: Style-Driven Source-Free Domain Adaptation in Clustered Federated Learning [32.098954477227046]
We propose a novel task in which the clients' data is unlabeled and the server accesses a source labeled dataset for pre-training only.
Our experiments show that our algorithm can efficiently tackle the new task, outperforming existing approaches.
arXiv Detail & Related papers (2022-10-05T15:23:52Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
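The summary does not spell out FedReg's mechanism, so the snippet below instead illustrates a generic FedProx-style proximal term that discourages the local model from drifting away from the received global model during local training; this is one common way to alleviate forgetting in FL, not necessarily FedReg's actual objective.

```python
import torch
import torch.nn.functional as F

def local_loss(model, global_params, x, y, mu=0.01):
    # Task loss on the local batch.
    task_loss = F.cross_entropy(model(x), y)
    # Proximal term: penalize drift from the received global parameters.
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```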
This list is automatically generated from the titles and abstracts of the papers on this site.