Better Generative Replay for Continual Federated Learning
- URL: http://arxiv.org/abs/2302.13001v1
- Date: Sat, 25 Feb 2023 06:26:56 GMT
- Title: Better Generative Replay for Continual Federated Learning
- Authors: Daiqing Qi, Handong Zhao, Sheng Li
- Abstract summary: Federated learning is a technique that enables a centralized server to learn from distributed clients via communications.
In this paper, we introduce the problem of continual federated learning, where clients incrementally learn new tasks and historical data cannot be stored.
We propose our FedCIL model with two simple but effective solutions: model consolidation and consistency enforcement.
- Score: 20.57194599280318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning is a technique that enables a centralized server to learn
from distributed clients via communications without accessing the client local
data. However, existing federated learning works mainly focus on a single task
scenario with static data. In this paper, we introduce the problem of continual
federated learning, where clients incrementally learn new tasks and historical
data cannot be stored for reasons such as limited storage or data retention
policies. Generative replay-based methods are effective for continual
learning without storing historical data, but adapting them to this setting is
challenging. By analyzing the behaviors of clients during training, we find
that the unstable training process caused by distributed training on non-IID
data leads to a notable performance degradation. To address this problem, we
propose our FedCIL model with two simple but effective solutions: model
consolidation and consistency enforcement. Our experimental results on multiple
benchmark datasets demonstrate that our method significantly outperforms
baselines.
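
The abstract does not detail how model consolidation and consistency enforcement are realized. The following is a minimal, hypothetical sketch of generative replay in a federated client/server loop, assuming a class-conditional generator over feature vectors, a KL-based consistency term against the global model, and plain parameter averaging on the server; class and function names (Generator, Classifier, client_update, server_consolidate) are illustrative and not the paper's API.

```python
# Hypothetical sketch of generative replay for continual federated learning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy class-conditional generator: (noise, label) -> synthetic feature vector."""
    def __init__(self, num_classes=10, noise_dim=32, feat_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, z, y):
        return self.net(z * self.embed(y))

class Classifier(nn.Module):
    """Toy classifier over the same feature space."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, x):
        return self.net(x)

def client_update(clf, global_clf, generator, loader, old_classes, epochs=1, lr=1e-3):
    """Local training with generative replay plus a consistency term against the global model."""
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:                        # current-task data only; no stored history
            z = torch.randn(x.size(0), 32)
            y_old = old_classes[torch.randint(len(old_classes), (x.size(0),))]
            with torch.no_grad():
                x_replay = generator(z, y_old)     # replay synthetic samples of old classes
                target_old = global_clf(x_replay)  # consolidated global model's predictions
            logits_new, logits_old = clf(x), clf(x_replay)
            consistency = F.kl_div(F.log_softmax(logits_old, dim=1),
                                   F.softmax(target_old, dim=1), reduction="batchmean")
            loss = F.cross_entropy(logits_new, y) + F.cross_entropy(logits_old, y_old) + consistency
            opt.zero_grad(); loss.backward(); opt.step()
    return clf.state_dict()

def server_consolidate(global_clf, client_states):
    """Model consolidation, approximated here by parameter averaging of client classifiers."""
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0) for k in client_states[0]}
    global_clf.load_state_dict(avg)
    return global_clf
```

The consistency term is the piece aimed at the instability the abstract attributes to non-IID distributed training: it ties each client's predictions on replayed samples to the consolidated global model.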
Related papers
- Offline Reinforcement Learning from Datasets with Structured Non-Stationarity [50.35634234137108]
Current Reinforcement Learning (RL) is often limited by the large amount of data needed to learn a successful policy.
We address a novel Offline RL problem setting in which, while collecting the dataset, the transition and reward functions gradually change between episodes but stay constant within each episode.
We propose a method based on Contrastive Predictive Coding that identifies this non-stationarity in the offline dataset, accounts for it when training a policy, and predicts it during evaluation.
arXiv Detail & Related papers (2024-05-23T02:41:36Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
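
The entry above only states that each client auto-tunes its own learning rate within an AMSGrad-style local update. The snippet below is a minimal sketch of that general shape, where the local step size is scaled from the client's own gradient statistics; the local_lr rule is invented for illustration and is not the FedLALR schedule from the paper.

```python
# Minimal sketch of a client-side AMSGrad-style step with a locally adapted
# learning rate. The local_lr() rule is illustrative, not the FedLALR schedule.
import numpy as np

class LocalAMSGrad:
    def __init__(self, dim, base_lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.base_lr, self.beta1, self.beta2, self.eps = base_lr, beta1, beta2, eps
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of second moments (AMSGrad correction)
        self.t = 0

    def local_lr(self):
        # Hypothetical client-specific scaling: take smaller steps when the
        # client's own gradients are noisy (large second moment).
        return self.base_lr / (1.0 + np.sqrt(self.v_hat.mean()))

    def step(self, w, grad):
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        m_hat = self.m / (1 - self.beta1 ** self.t)
        return w - self.local_lr() * m_hat / (np.sqrt(self.v_hat) + self.eps)
```

Each client would run this step on its local batches and send the resulting parameters (or the update) back to the server for averaging.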
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Don't Memorize; Mimic The Past: Federated Class Incremental Learning Without Episodic Memory [36.4406505365313]
This paper presents a framework for federated class incremental learning that utilizes a generative model to synthesize samples from past distributions instead of storing part of past data.
The generative model is trained on the server using data-free methods at the end of each task without requesting data from clients.
arXiv Detail & Related papers (2023-07-02T07:06:45Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or data properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Federated Unlearning: How to Efficiently Erase a Client in FL? [9.346673106489742]
We propose a method to erase a client by removing the influence of their entire local data from the trained global model.
Our unlearning method achieves performance comparable to the gold-standard unlearning approach of federated retraining from scratch.
Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients.
arXiv Detail & Related papers (2022-07-12T13:24:23Z)
- Tackling Dynamics in Federated Incremental Learning with Variational Embedding Rehearsal [27.64806509651952]
We propose a novel algorithm to address the incremental learning process in an FL scenario.
We first propose using deep Variational Embeddings that secure the privacy of the client data.
Second, we propose a server-side training method that enables a model to rehearse the previously learnt knowledge.
arXiv Detail & Related papers (2021-10-19T02:26:35Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Federated Few-Shot Learning with Adversarial Learning [30.905239262227]
We propose a federated few-shot learning framework that learns a classification model able to classify unseen data classes from only a few labeled samples.
We show our approaches outperform baselines by more than 10% on vision tasks and by 5% on language tasks.
arXiv Detail & Related papers (2021-04-01T09:44:57Z)
- Federated Learning with Taskonomy for Non-IID Data [0.0]
We introduce federated learning with taskonomy.
In a one-off process, the server provides the clients with a pretrained (and fine-tunable) encoder to compress their data into a latent representation, and transmit the signature of their data back to the server.
The server then learns the task-relatedness among clients via manifold learning, and performs a generalization of federated averaging.
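
The entry above describes the pipeline only at a high level (pretrained encoder, per-client data signature, manifold learning over task-relatedness, a generalization of federated averaging). Below is a minimal, hypothetical sketch of that pipeline, assuming the signature is a mean encoder embedding and the generalized averaging is simply per-group averaging of related clients; the paper's actual signature and aggregation rule are not specified in this summary, and sklearn's SpectralEmbedding/KMeans are stand-ins for its manifold-learning step.

```python
# Hypothetical sketch: clients compress data with a pretrained encoder and send a
# "signature" (here, the mean embedding); the server embeds the signatures with a
# manifold-learning step, groups related clients, and averages weights per group.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

def client_signature(encoder, data):
    """Client side: encode local data and return a mean embedding as the signature."""
    return encoder(data).mean(axis=0)

def relatedness_weighted_average(signatures, client_weights, n_groups=2):
    """Server side: cluster clients by signature similarity, then average the
    (flattened) parameter vectors within each group instead of globally."""
    sig = np.stack(signatures)                                           # (n_clients, d)
    coords = SpectralEmbedding(n_components=2, affinity="rbf").fit_transform(sig)
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(coords)  # related clients
    averaged = {g: np.mean([w for w, lbl in zip(client_weights, groups) if lbl == g], axis=0)
                for g in set(groups)}
    return groups, averaged
```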
arXiv Detail & Related papers (2021-03-29T20:47:45Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.