Elastically-Constrained Meta-Learner for Federated Learning
- URL: http://arxiv.org/abs/2306.16703v3
- Date: Sat, 5 Aug 2023 08:36:23 GMT
- Title: Elastically-Constrained Meta-Learner for Federated Learning
- Authors: Peng Lan, Donglai Chen, Chong Xie, Keshu Chen, Jinyuan He, Juntao
Zhang, Yonghong Chen and Yan Xu
- Abstract summary: Federated learning is an approach to collaboratively training machine learning models for multiple parties that prohibit data sharing.
One of the challenges in federated learning is non-IID data between clients, as a single model cannot fit the data distribution of all clients.
- Score: 3.032797107899338
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Federated learning is an approach to collaboratively training machine
learning models for multiple parties that prohibit data sharing. One of the
challenges in federated learning is non-IID data between clients, as a single
model cannot fit the data distribution of all clients. Meta-learning, such as
Per-FedAvg, has been introduced to cope with this challenge. Meta-learning learns
shared initial parameters for all clients, and each client employs gradient
descent to quickly adapt that initialization to its local data distribution,
realizing model personalization. However, due to the non-convex loss function
and the randomness of sampled updates, meta-learning approaches have unstable
goals in local adaptation for the same client. This fluctuation across
adaptation directions hinders convergence in meta-learning. To overcome this
challenge, we use the historical locally adapted model to restrict the direction
of the inner loop and propose an elastically-constrained method. As a result,
the current round's inner loop retains historical goals while adapting toward
better solutions. Experiments show our method boosts meta-learning convergence
and improves personalization without additional computation or communication.
Our method achieves state-of-the-art results on all metrics across three public
datasets.
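Based only on the abstract, here is a minimal PyTorch sketch of what such an elastically-constrained inner loop could look like. The quadratic penalty form, the coefficient `lam`, and using the previous round's adapted weights as the anchor are illustrative assumptions, not the paper's exact formulation.

```python
import copy
import torch

def adapt_client(global_model, loss_fn, loader, prev_adapted,
                 lam=0.1, lr=1e-2, steps=1):
    """Inner loop for one client: adapt the shared initialization to local
    data while elastically anchoring to that client's previously adapted
    model. `prev_adapted` maps parameter names to last round's (detached)
    adapted tensors."""
    model = copy.deepcopy(global_model)               # start from shared init
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        opt.zero_grad()
        task_loss = loss_fn(model(x), y)
        # Elastic term: quadratic pull toward the historical adaptation goal,
        # stabilizing the inner-loop direction across rounds.
        elastic = sum(((p - prev_adapted[name]) ** 2).sum()
                      for name, p in model.named_parameters())
        (task_loss + lam * elastic).backward()
        opt.step()
    return {name: p.detach().clone() for name, p in model.named_parameters()}
```

In a full training loop, the returned adapted parameters would both serve as the next round's anchor and drive the outer (meta) update of the shared initialization, as in Per-FedAvg.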
Related papers
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
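As a rough illustration of the client-side machinery such a method builds on, here is a standard AMSGrad parameter update; in a FedLALR-style scheme each client would run it locally with its own scheduled learning rate. The function name and state layout are placeholders, and FedLALR's actual scheduling rule is not shown.

```python
import torch

def amsgrad_step(param, grad, state, lr, betas=(0.9, 0.999), eps=1e-8):
    """One AMSGrad update on a single tensor. `state` holds "m", "v", and
    "v_hat", each initialized to torch.zeros_like(param); `lr` would be the
    client-specific, auto-tuned learning rate in a FedLALR-style scheme."""
    m, v, v_hat = state["m"], state["v"], state["v_hat"]
    m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])            # 1st moment
    v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])  # 2nd moment
    torch.maximum(v_hat, v, out=v_hat)                         # AMSGrad max trick
    param.sub_(lr * m / (v_hat.sqrt() + eps))
```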
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization [36.916155654985936]
Meeting summarization has emerged as a promising technique for providing users with condensed summaries.
We propose adapter-based Federated Selective Knowledge Distillation (AdaFedSelecKD) for training performant client models.
arXiv Detail & Related papers (2023-08-07T03:34:01Z)
- Re-Weighted Softmax Cross-Entropy to Control Forgetting in Federated Learning [14.196701066823499]
In Federated Learning, a global model is learned by aggregating model updates computed at a set of independent client nodes.
We show that individual client models experience catastrophic forgetting with respect to data from other clients.
We propose an efficient approach that modifies the cross-entropy objective on a per-client basis by re-weighting the softmax logits prior to computing the loss.
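A minimal sketch of one way to re-weight softmax logits per client follows; deriving `class_weights` from the client's label frequencies and folding the weights in as log-offsets before the softmax are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def reweighted_ce(logits, targets, class_weights):
    """Cross-entropy with per-class weights folded into the logits.
    `class_weights` (shape: [num_classes]) might, e.g., reflect this
    client's label frequencies so that locally absent classes are not
    aggressively unlearned during local training."""
    adjusted = logits + torch.log(class_weights + 1e-12)  # log-offset per class
    return F.cross_entropy(adjusted, targets)
```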
arXiv Detail & Related papers (2023-04-11T14:51:55Z)
- Meta Knowledge Condensation for Federated Learning [65.20774786251683]
Existing federated learning paradigms usually extensively exchange distributed models at a central solver to achieve a more powerful model.
This would incur severe communication burden between a server and multiple clients especially when data distributions are heterogeneous.
Unlike existing paradigms, we introduce an alternative perspective to significantly decrease the communication cost in federated learning.
arXiv Detail & Related papers (2022-09-29T15:07:37Z)
- Online Meta-Learning for Model Update Aggregation in Federated Learning for Click-Through Rate Prediction [2.9649783577150837]
We propose a simple online meta-learning method to learn a strategy of aggregating the model updates.
Our method significantly outperforms the state-of-the-art in both the speed of convergence and the quality of the final learning results.
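The blurb leaves the aggregation strategy unspecified; one plausible reading, sketched below in PyTorch, is a softmax-weighted combination of client updates whose weights are meta-learned online (e.g., by differentiating a held-out loss through the aggregation). All names and the weighting scheme here are guesses for illustration.

```python
import torch

def weighted_aggregate(client_updates, weight_logits):
    """Combine per-client update tensors with learnable softmax weights.
    `weight_logits` (one entry per client) would be updated online, e.g.
    by backpropagating a held-out loss through this aggregation."""
    w = torch.softmax(weight_logits, dim=0)   # normalized aggregation weights
    n_tensors = len(client_updates[0])        # tensors per client update
    return [sum(w[i] * client_updates[i][t] for i in range(len(client_updates)))
            for t in range(n_tensors)]
```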
arXiv Detail & Related papers (2022-08-30T18:13:53Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles, namely straggler delay and statistical (data) heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which aims to control weight communication and aggregation, augmented with a tailored learning algorithm to personalize the resulting models at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
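That adaptation step is straightforward to sketch in PyTorch; the hyperparameters and data interface below are placeholders rather than the paper's exact setup.

```python
import copy
import torch

def personalize(shared_model, loss_fn, loader, lr=1e-2, steps=1):
    """Personalize the meta-learned shared initialization for one user by
    taking one or a few gradient steps on that user's own data."""
    model = copy.deepcopy(shared_model)   # keep the shared init intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```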
arXiv Detail & Related papers (2020-02-19T01:08:46Z)