Federated Continual Learning with Weighted Inter-client Transfer
- URL: http://arxiv.org/abs/2003.03196v5
- Date: Mon, 14 Jun 2021 07:57:18 GMT
- Title: Federated Continual Learning with Weighted Inter-client Transfer
- Authors: Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, Sung Ju Hwang
- Abstract summary: We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT)
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
- Score: 79.93004004545736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a surge of interest in continual learning and federated
learning, both of which are important in deep neural networks in real-world
scenarios. Yet little research has been done regarding the scenario where each
client learns on a sequence of tasks from a private local data stream. This
problem of federated continual learning poses new challenges to continual
learning, such as utilizing knowledge from other clients, while preventing
interference from irrelevant knowledge. To resolve these issues, we propose a
novel federated continual learning framework, Federated Weighted Inter-client
Transfer (FedWeIT), which decomposes the network weights into global federated
parameters and sparse task-specific parameters, and each client receives
selective knowledge from other clients by taking a weighted combination of
their task-specific parameters. FedWeIT minimizes interference between
incompatible tasks, and also allows positive knowledge transfer across clients
during learning. We validate our FedWeIT against existing federated learning
and continual learning methods under varying degrees of task similarity across
clients, and our model significantly outperforms them with a large reduction in
the communication cost. Code is available at https://github.com/wyjeong/FedWeIT
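To picture the decomposition described above, here is a minimal NumPy sketch: a client's effective weights are formed from a masked global base parameter, its own sparse task-adaptive parameter, and an attention-weighted sum of other clients' task-adaptive parameters. All names, shapes, and values are invented for illustration and are not taken from the official implementation linked above.

```python
import numpy as np

# Toy sketch of the FedWeIT-style weight decomposition (illustrative only).
rng = np.random.default_rng(0)
d = 8                                              # toy parameter dimension

global_base = rng.normal(size=d)                   # global federated parameter from the server
local_mask = (rng.random(d) < 0.3).astype(float)   # sparse mask for this client's current task
local_adaptive = rng.normal(size=d) * local_mask   # sparse task-specific parameter

# Sparse task-specific parameters broadcast from two other clients (assumed here).
foreign_adaptive = [rng.normal(size=d) * (rng.random(d) < 0.3) for _ in range(2)]
attention = np.array([0.6, 0.1])                   # weights on the foreign parameters

# Effective weights for the current task: masked base + own sparse parameters
# + weighted combination of the other clients' task-specific parameters.
effective = (global_base * local_mask
             + local_adaptive
             + sum(a * p for a, p in zip(attention, foreign_adaptive)))
print(effective.round(3))
```

Because only the sparse task-specific terms travel between clients, communication stays cheap, which is consistent with the cost reduction the abstract reports.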
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
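For intuition only, the sketch below shows one client-side AMSGrad step with a client-specific learning rate. FedLALR's actual auto-tuning rule is defined in the paper; client_lr here is just a placeholder showing where a per-client rate would enter.

```python
import numpy as np

# Hedged sketch: a single AMSGrad update with a client-specific learning rate.
def amsgrad_step(theta, grad, m, v, v_hat, client_lr,
                 beta1=0.9, beta2=0.99, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)                  # AMSGrad's max-tracking term
    theta = theta - client_lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat

theta = np.zeros(4)
m, v, v_hat = np.zeros(4), np.zeros(4), np.zeros(4)
grad = np.array([0.5, -0.2, 0.1, 0.0])
theta, m, v, v_hat = amsgrad_step(theta, grad, m, v, v_hat, client_lr=0.01)
print(theta)
```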
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Masked Autoencoders are Efficient Continual Federated Learners [20.856520787551453]
Continual learning should be grounded in unsupervised learning of representations that are shared across clients.
Masked autoencoders for distribution estimation are particularly amenable to this setup.
arXiv Detail & Related papers (2023-06-06T09:38:57Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- FedKNOW: Federated Continual Learning with Signature Task Knowledge Integration at Edge [35.80543542333692]
We propose FedKNOW, an accurate and scalable federated continual learning framework.
FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks.
We show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves larger improvements under more difficult scenarios.
arXiv Detail & Related papers (2022-12-04T04:03:44Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature space.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
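As a hedged sketch, the snippet below shows what head-only aggregation could look like when clients keep heterogeneous feature extractors but share a common classifier shape; the actual FedClassAvg procedure may differ, and all names and shapes are illustrative.

```python
import numpy as np

# Server-side averaging of the classifier head only (illustrative sketch).
def average_classifier_heads(client_heads):
    """client_heads: list of (W, b) classifier parameters sharing a common shape."""
    W_avg = np.mean([W for W, _ in client_heads], axis=0)
    b_avg = np.mean([b for _, b in client_heads], axis=0)
    return W_avg, b_avg

rng = np.random.default_rng(0)
heads = [(rng.normal(size=(16, 10)), rng.normal(size=10)) for _ in range(3)]
W_avg, b_avg = average_classifier_heads(heads)
print(W_avg.shape, b_avg.shape)   # (16, 10) (10,) -- broadcast back to every client
```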
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
- Federated Continual Learning for Text Classification via Selective Inter-client Transfer [21.419581793986378]
In this work, we combine two paradigms, Federated Learning (FL) and Continual Learning (CL), for the text classification task in the cloud-edge continuum.
The objective of Federated Continual Learning (FCL) is to improve deep learning models over their lifetime at each client through (relevant and efficient) knowledge transfer without sharing data.
Here, we address the challenge of minimizing inter-client interference during knowledge sharing, which arises from heterogeneous tasks across clients in the FCL setup.
In doing so, we propose a novel framework, Federated Selective Inter-client Transfer (FedSeIT) which selectively combines model parameters of foreign clients.
arXiv Detail & Related papers (2022-10-12T11:24:13Z)
- Addressing Client Drift in Federated Continual Learning with Adaptive Optimization [10.303676184878896]
We outline a framework for performing Federated Continual Learning (FCL) by using NetTailor as a candidate continual learning approach.
We show that adaptive federated optimization can reduce the adverse impact of client drift and showcase its effectiveness on CIFAR100, MiniImagenet, and Decathlon benchmarks.
arXiv Detail & Related papers (2022-03-24T20:00:03Z)
- FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning [50.756991828015316]
Multi-task learning (MTL) is a novel framework to learn several tasks simultaneously with a single shared network.
We propose FedGradNorm, which uses a dynamic-weighting method to normalize gradient norms in order to balance learning speeds across tasks.
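As a rough illustration of gradient-norm balancing (not the paper's exact rule), the toy update below rescales each task's loss weight toward the mean gradient norm so that no single task dominates training.

```python
import numpy as np

# Hedged sketch: rebalance per-task loss weights using their gradient norms.
def rebalance_task_weights(weights, grad_norms):
    grad_norms = np.asarray(grad_norms, dtype=float)
    target = grad_norms.mean()                      # common norm the tasks aim for
    new_w = weights * (target / (grad_norms + 1e-12))
    return new_w * (len(weights) / new_w.sum())     # keep weights summing to n_tasks

w = np.ones(3)
w = rebalance_task_weights(w, grad_norms=[5.0, 1.0, 0.5])
print(w.round(2))   # tasks with smaller gradient norms receive larger weights
```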
arXiv Detail & Related papers (2022-03-24T17:43:12Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
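To make the last entry's scheme concrete, here is a toy sketch of one client round for a linear model: the client runs several cheap updates on its private low-dimensional head, then takes a single gradient step on the shared representation. The least-squares objective, shapes, and step sizes are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Toy sketch: alternate many head-only updates with one representation update.
rng = np.random.default_rng(0)
d, k = 20, 5                                   # input dim, representation dim
B = rng.normal(size=(d, k)) / np.sqrt(d)       # shared representation from the server

def local_round(B, X, y, head_steps=10, lr=0.1):
    w = np.zeros(k)                            # client-specific local head
    for _ in range(head_steps):                # many quick head-only updates
        r = X @ B @ w - y
        w -= lr * (B.T @ X.T @ r) / len(y)
    r = X @ B @ w - y                          # one gradient step on the representation
    grad_B = X.T @ np.outer(r, w) / len(y)
    return B - lr * grad_B, w

X = rng.normal(size=(50, d))
y = X @ B @ rng.normal(size=k) + 0.01 * rng.normal(size=50)
B_new, w = local_round(B, X, y)
print(np.linalg.norm(X @ B_new @ w - y))       # residual after one local round
```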
This list is automatically generated from the titles and abstracts of the papers in this site.