Tackling Dynamics in Federated Incremental Learning with Variational
Embedding Rehearsal
- URL: http://arxiv.org/abs/2110.09695v1
- Date: Tue, 19 Oct 2021 02:26:35 GMT
- Title: Tackling Dynamics in Federated Incremental Learning with Variational
Embedding Rehearsal
- Authors: Tae Jin Park and Kenichi Kumatani and Dimitrios Dimitriadis
- Abstract summary: We propose a novel algorithm to address the incremental learning process in an FL scenario.
We first propose using deep Variational Embeddings that secure the privacy of the client data.
Second, we propose a server-side training method that enables a model to rehearse the previously learnt knowledge.
- Score: 27.64806509651952
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning is a fast-growing area of ML where the training datasets
are extremely distributed, all while dynamically changing over time. Models
need to be trained on clients' devices without any guarantees for either
homogeneity or stationarity of the local private data. The need for continual
training has also risen, due to the ever-increasing production of in-task data.
However, pursuing both directions at the same time is challenging, since client
data privacy is a major constraint, especially for rehearsal methods. Herein,
we propose a novel algorithm to address the incremental learning process in an
FL scenario, based on realistic client enrollment scenarios where clients can
drop in or out dynamically. We first propose using deep Variational Embeddings
that secure the privacy of the client data. Second, we propose a server-side
training method that enables a model to rehearse the previously learnt
knowledge. Finally, we investigate the performance of federated incremental
learning in dynamic client enrollment scenarios. The proposed method shows
parity with offline training on domain-incremental learning, addressing
challenges in both the dynamic enrollment of clients and the domain shifting of
client data.
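The abstract outlines two components: privacy-preserving variational embeddings computed on the clients, and a server-side training step that rehearses previously learnt knowledge. As a rough illustration only, the following is a minimal PyTorch sketch of that idea; the dimensions, the helper names (client_summarize, server_rehearse), and the per-class Gaussian summarization are assumptions made here for exposition, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' code): clients share only variational
# embedding statistics instead of raw data; the server rehearses earlier
# domains by sampling embeddings from those stored distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NUM_CLASSES, IN_DIM = 32, 10, 64  # illustrative sizes, not from the paper

class VariationalEmbedder(nn.Module):
    """Client-side encoder producing a Gaussian posterior over embeddings."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(IN_DIM, EMB_DIM)
        self.logvar = nn.Linear(IN_DIM, EMB_DIM)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar

def client_summarize(embedder, x, y):
    """Hypothetical client step: return per-class embedding statistics only;
    the raw features x never leave the client."""
    with torch.no_grad():
        _, mu, logvar = embedder(x)
    return {int(c): (mu[y == c].mean(0), logvar[y == c].mean(0)) for c in y.unique()}

def server_rehearse(classifier, stats_per_client, steps=100, batch=64):
    """Hypothetical server step: train the classifier head on embeddings
    sampled from the stored per-class Gaussians (the 'rehearsal')."""
    opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
    pool = [(c, mu, lv) for s in stats_per_client for c, (mu, lv) in s.items()]
    for _ in range(steps):
        picks = [pool[i] for i in torch.randint(len(pool), (batch,)).tolist()]
        z = torch.stack([mu + torch.randn(EMB_DIM) * (0.5 * lv).exp() for _, mu, lv in picks])
        y = torch.tensor([c for c, _, _ in picks])
        loss = F.cross_entropy(classifier(z), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Toy usage: random tensors stand in for two clients' private datasets.
embedder, head = VariationalEmbedder(), nn.Linear(EMB_DIM, NUM_CLASSES)
stats = [client_summarize(embedder, torch.randn(128, IN_DIM),
                          torch.randint(0, NUM_CLASSES, (128,)))
         for _ in range(2)]
server_rehearse(head, stats)
```

In this sketch only the embedding statistics cross the network, which is the property the abstract attributes to the variational embeddings.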
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Adapter-based Selective Knowledge Distillation for Federated
Multi-domain Meeting Summarization [36.916155654985936]
Meeting summarization has emerged as a promising technique for providing users with condensed summaries.
We propose adapter-based Federated Selective Knowledge Distillation (AdaFedSelecKD) for training performant client models.
arXiv Detail & Related papers (2023-08-07T03:34:01Z)
- Elastically-Constrained Meta-Learner for Federated Learning [3.032797107899338]
Federated learning is an approach to collaboratively training machine learning models across multiple parties that prohibit data sharing.
One of the challenges in federated learning is the non-constrained data across clients, since a single model cannot fit the data distribution of every client.
arXiv Detail & Related papers (2023-06-29T05:58:47Z)
- SalientGrads: Sparse Models for Communication Efficient and Data Aware
Distributed Federated Training [1.0413504599164103]
Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
One of the significant challenges of FL is limited computation and low communication bandwidth in resource-limited edge client nodes.
We propose Salient Grads, which simplifies the process of sparse training by choosing a data-aware subnetwork before training.
arXiv Detail & Related papers (2023-04-15T06:46:37Z)
- Better Generative Replay for Continual Federated Learning [20.57194599280318]
Federated learning is a technique that enables a centralized server to learn from distributed clients via communications.
In this paper, we introduce the problem of continual federated learning, where clients incrementally learn new tasks and history data cannot be stored.
We propose our FedCIL model with two simple but effective solutions: model consolidation and consistency enforcement.
arXiv Detail & Related papers (2023-02-25T06:26:56Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- Online Meta-Learning for Model Update Aggregation in Federated Learning
for Click-Through Rate Prediction [2.9649783577150837]
We propose a simple online meta-learning method to learn a strategy of aggregating the model updates.
Our method significantly outperforms the state-of-the-art in both the speed of convergence and the quality of the final learning results.
arXiv Detail & Related papers (2022-08-30T18:13:53Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT)
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
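Among the entries above, FedWeIT states its mechanism concretely enough to illustrate: network weights are decomposed into global federated parameters plus sparse task-specific ones. The following is a loose, hypothetical sketch of such a decomposition (the layer, the per-client mask, and the L1 penalty are assumptions made here for exposition, not FedWeIT's exact formulation).

```python
# Loose sketch of a FedWeIT-style decomposed layer: a shared global weight
# plus a sparse, client/task-specific additive term. Names and the L1
# sparsity penalty are illustrative, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.base = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)  # global/federated
        self.task = nn.Parameter(torch.zeros(out_dim, in_dim))         # sparse, kept local
        self.mask = nn.Parameter(torch.ones(out_dim))                  # per-client scaling

    def forward(self, x):
        w = self.mask.unsqueeze(1) * self.base + self.task
        return F.linear(x, w)

    def sparsity_penalty(self):
        return self.task.abs().sum()  # keeps the task-specific term sparse

# Client update: only `base` would be averaged by the server; `task`/`mask` stay local.
layer = DecomposedLinear(16, 4)
opt = torch.optim.SGD(layer.parameters(), lr=0.05)
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
for _ in range(20):
    loss = F.cross_entropy(layer(x), y) + 1e-3 * layer.sparsity_penalty()
    opt.zero_grad(); loss.backward(); opt.step()
```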
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.