Revisiting Weighted Aggregation in Federated Learning with Neural
Networks
- URL: http://arxiv.org/abs/2302.10911v4
- Date: Mon, 12 Jun 2023 14:19:53 GMT
- Title: Revisiting Weighted Aggregation in Federated Learning with Neural
Networks
- Authors: Zexi Li, Tao Lin, Xinyi Shang, Chao Wu
- Abstract summary: In federated learning (FL), weighted aggregation of local models is conducted to generate a global model.
We find that the sum of weights can be smaller than 1, causing a global weight shrinking effect and improving generalization.
We propose an effective method for Federated Learning with Learnable Aggregation Weights, named FedLAW.
- Score: 5.779987217952073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), weighted aggregation of local models is conducted
to generate a global model, and the aggregation weights are normalized (the sum
of weights is 1) and proportional to the local data sizes. In this paper, we
revisit the weighted aggregation process and gain new insights into the
training dynamics of FL. First, we find that the sum of weights can be smaller
than 1, causing a global weight shrinking effect (analogous to weight decay) and
improving generalization. We explore how the optimal shrinking factor is
affected by clients' data heterogeneity and local epochs. Second, we dive into
the relative aggregation weights among clients to depict the clients'
importance. We develop client coherence to study the learning dynamics and find
that a critical point exists. Before reaching the critical point, more coherent
clients play more essential roles in generalization. Based on the above
insights, we propose an effective method for Federated Learning with Learnable
Aggregation Weights, named FedLAW. Extensive experiments verify that our
method can improve the generalization of the global model by a large margin on
different datasets and models.
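To make the aggregation rule concrete, below is a minimal NumPy sketch, assuming clients exchange flat parameter vectors. The function name and the fixed gamma are placeholders for exposition: FedLAW makes both the shrinking factor and the relative weights learnable, so this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def aggregate(client_params, weights, gamma=0.9):
    """Weighted aggregation with a global shrinking factor (a sketch,
    not FedLAW itself).

    client_params: list of 1-D parameter vectors, one per client.
    weights:       relative client weights w_i, normalized below.
    gamma:         shrinking factor; the effective weights gamma * w_i
                   sum to gamma < 1, which acts like weight decay on the
                   global model (the "global weight shrinking" effect).
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                    # relative weights sum to 1
    stacked = np.stack(client_params)  # shape: (num_clients, dim)
    return gamma * (w @ stacked)       # effective weights sum to gamma

# FedAvg is the special case gamma = 1 with w_i proportional to local
# data sizes; FedLAW instead learns gamma and w.
params = [np.random.randn(10) for _ in range(3)]
global_params = aggregate(params, weights=[100.0, 50.0, 50.0], gamma=0.9)
```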
Related papers
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that tackles class imbalance by utilizing an adaptive inter-client co-learning approach.
arXiv Detail & Related papers (2024-11-04T05:44:28Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- An Element-Wise Weights Aggregation Method for Federated Learning [11.9232569348563]
This paper introduces an innovative Element-Wise Weights Aggregation Method for Federated Learning (EWWA-FL).
EWWA-FL aggregates local weights into the global model at the level of individual elements, allowing each participating client to make element-wise contributions to the learning process (see the first sketch after this list).
By taking into account the unique dataset characteristics of each client, EWWA-FL enhances the robustness of the global model to different datasets.
arXiv Detail & Related papers (2024-04-24T15:16:06Z)
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round (see the second sketch after this list).
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP Tasks [38.68736962054861]
FewFedWeight is a few-shot federated learning framework across multiple tasks.
It trains client models on isolated devices without sharing their data.
It can significantly improve the performance of client models on 61% of tasks, with an average performance improvement rate of 30.5% over the baseline.
arXiv Detail & Related papers (2022-12-16T09:01:56Z)
- Closing the Gap between Client and Global Model Performance in Heterogeneous Federated Learning [2.1044900734651626]
We show how the approach chosen for training custom client models affects the global model.
We propose a new approach that combines knowledge distillation (KD) and Learning without Forgetting (LwoF) to produce improved personalised models.
arXiv Detail & Related papers (2022-11-07T11:12:57Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
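The element-wise aggregation described in the EWWA-FL entry above can be illustrated with a short sketch. The function name and the per-element normalization below are assumptions made for exposition; the paper's adaptive scheme for choosing the element-wise weights is not reproduced here.

```python
import numpy as np

def elementwise_aggregate(client_params, elementwise_weights):
    """Aggregate with a separate weight per parameter element per client
    (a sketch of the EWWA-FL idea, not the paper's exact algorithm).

    client_params:       array of shape (num_clients, dim).
    elementwise_weights: array of shape (num_clients, dim); normalized
                         below so that each element's weights sum to 1.
    """
    w = np.asarray(elementwise_weights, dtype=np.float64)
    w = w / w.sum(axis=0, keepdims=True)  # normalize per element
    return (w * np.asarray(client_params)).sum(axis=0)

# With uniform element-wise weights this reduces to plain averaging.
params = np.random.randn(3, 10)
global_params = elementwise_aggregate(params, np.abs(np.random.randn(3, 10)))
```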
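Likewise, the Lorar entry above describes re-weighting each client's contribution by its per-round training-loss reduction. Below is a minimal sketch of one plausible reading of that rule; clipping negative reductions to zero and the uniform fallback are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def loss_reduction_weights(losses_before, losses_after):
    """Weight clients by this round's training-loss reduction (a sketch
    of the Lorar idea; details differ in the actual paper)."""
    deltas = np.maximum(
        np.asarray(losses_before, dtype=np.float64)
        - np.asarray(losses_after, dtype=np.float64),
        0.0,  # clip clients whose loss increased (our assumption)
    )
    total = deltas.sum()
    if total == 0.0:
        return np.full(len(deltas), 1.0 / len(deltas))  # uniform fallback
    return deltas / total

# The client with the largest loss drop gets the largest weight.
w = loss_reduction_weights([1.2, 0.9, 1.5], [0.8, 0.85, 1.0])
```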