Deep Unfolding-based Weighted Averaging for Federated Learning in
Heterogeneous Environments
- URL: http://arxiv.org/abs/2212.12191v2
- Date: Mon, 28 Aug 2023 06:54:12 GMT
- Title: Deep Unfolding-based Weighted Averaging for Federated Learning in
Heterogeneous Environments
- Authors: Ayano Nakai-Kasai and Tadashi Wadayama
- Abstract summary: Federated learning is a collaborative model training method that iterates model updates by multiple clients and aggregation of the updates by a central server.
To adjust the aggregation weights, this paper employs deep unfolding, a parameter tuning method.
The proposed method can handle large-scale learning models with the aid of pretrained models, such that it can perform practical real-world tasks.
- Score: 11.023081396326507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a collaborative model training method that
iterates model updates by multiple clients and aggregation of the updates by a
central server. Device and statistical heterogeneity of the participating
clients cause significant performance degradation, so an appropriate
aggregation weight should be assigned to each client in the server's
aggregation phase. To adjust the aggregation weights, this paper employs deep
unfolding, a parameter tuning method that combines the data-driven learning
capability of deep learning with domain knowledge. This enables us to directly
incorporate the heterogeneity of the environment of interest into the tuning of
the aggregation weights. The proposed approach can be combined with various
federated learning algorithms. Numerical experiments indicate that the proposed
method achieves higher test accuracy on unknown class-balanced data than
conventional heuristic weighting methods. The proposed method can handle
large-scale learning models with the aid of pretrained models, such that it can
perform practical real-world tasks. The convergence rate of federated learning
algorithms with the proposed method is also provided.
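The aggregation idea described above can be illustrated with a minimal numerical sketch (not the authors' implementation; all names and the toy setup are illustrative). Aggregation weights are parameterized by a softmax and tuned through unrolled federated learning rounds against class-balanced validation data; finite differences stand in for backpropagation through the unrolling:

```python
# Hypothetical sketch of deep-unfolding-style aggregation-weight tuning.
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def client_update(w, X, y, lr=0.1, steps=5):
    """Local SGD on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fl_round(w_global, clients, weights):
    """One FL round: local updates, then weighted averaging at the server."""
    local_models = [client_update(w_global, X, y) for X, y in clients]
    return sum(a * wl for a, wl in zip(weights, local_models))

# Heterogeneous clients: same true model, very different noise levels.
w_true = np.array([1.0, -2.0])
clients = []
for noise in (0.01, 0.01, 2.0):  # the third client is much noisier
    X = rng.normal(size=(50, 2))
    y = X @ w_true + noise * rng.normal(size=50)
    clients.append((X, y))

# Class-balanced validation set used to tune the aggregation weights.
Xv = rng.normal(size=(200, 2))
yv = Xv @ w_true

def val_loss(theta, rounds=10):
    """Unroll several FL rounds and evaluate the resulting global model."""
    w = np.zeros(2)
    a = softmax(theta)
    for _ in range(rounds):
        w = fl_round(w, clients, a)
    return np.mean((Xv @ w - yv) ** 2)

# Tune theta by finite differences (a stand-in for backpropagating
# through the unrolled rounds, as deep unfolding would do).
theta = np.zeros(3)
eps = 1e-4
for _ in range(100):
    g = np.zeros_like(theta)
    for i in range(3):
        e = np.zeros(3)
        e[i] = eps
        g[i] = (val_loss(theta + e) - val_loss(theta - e)) / (2 * eps)
    theta -= 5.0 * g

weights = softmax(theta)
print(weights)  # the noisy client should receive the smallest weight
```

The tuned weights remain a valid convex combination while the client whose update hurts validation accuracy is automatically downweighted.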
Related papers
- Dual-Criterion Model Aggregation in Federated Learning: Balancing Data Quantity and Quality [0.0]
Federated learning (FL) has become one of the key methods for privacy-preserving collaborative learning.
An aggregation algorithm is recognized as one of the most crucial components for ensuring the efficacy and security of the system.
This study proposes a novel dual-criterion weighted aggregation algorithm involving the quantity and quality of data from the client node.
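One plausible form of such a dual-criterion weight is a blend of a sample-count term and a quality-score term; the sketch below is a hypothetical illustration (the paper's actual criteria and combination rule may differ):

```python
# Hypothetical dual-criterion aggregation weights combining data quantity
# (sample counts) and a per-client quality score.
import numpy as np

def dual_criterion_weights(n_samples, quality, alpha=0.5):
    """Blend quantity-based and quality-based weights.
    alpha = 1.0 recovers FedAvg's pure sample-count weighting."""
    n = np.asarray(n_samples, dtype=float)
    q = np.asarray(quality, dtype=float)
    wn = n / n.sum()                  # quantity criterion
    wq = q / q.sum()                  # quality criterion
    w = alpha * wn + (1.0 - alpha) * wq
    return w / w.sum()

# A large but low-quality client ends up with less weight than
# sample counts alone would give it.
weights = dual_criterion_weights([100, 400, 500], [0.9, 0.5, 0.2])
print(weights)
```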
arXiv Detail & Related papers (2024-11-12T14:09:16Z)
- FedECADO: A Dynamical System Model of Federated Learning [15.425099636035108]
Federated learning harnesses the power of distributed optimization to train a unified machine learning model across separate clients.
This work proposes FedECADO, a new algorithm inspired by a dynamical system representation of the federated learning process.
Compared to prominent techniques, including FedProx and FedNova, FedECADO achieves higher classification accuracies in numerous heterogeneous scenarios.
arXiv Detail & Related papers (2024-10-13T17:26:43Z)
- Vanishing Variance Problem in Fully Decentralized Neural-Network Systems [0.8212195887472242]
Federated learning and gossip learning are emerging methodologies designed to mitigate data privacy concerns.
Our research introduces a variance-corrected model averaging algorithm.
Our simulation results demonstrate that our approach enables gossip learning to achieve convergence efficiency comparable to that of federated learning.
arXiv Detail & Related papers (2024-04-06T12:49:20Z)
- DA-PFL: Dynamic Affinity Aggregation for Personalized Federated Learning [13.393529840544117]
Existing personalized federated learning models prefer to aggregate clients with similar data distributions to improve the performance of the learned models.
We propose a novel Dynamic Affinity-based Personalized Federated Learning model (DA-PFL) to alleviate the class imbalance problem.
arXiv Detail & Related papers (2024-03-14T11:12:10Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Merging Models with Fisher-Weighted Averaging [24.698591753644077]
We introduce a fundamentally different method for transferring knowledge across models that amounts to "merging" multiple models into one.
Our approach effectively involves computing a weighted average of the models' parameters.
We show that our merging procedure makes it possible to combine models in previously unexplored ways.
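The weighted average described here can be sketched concretely: each parameter is averaged with per-parameter weights given by a diagonal Fisher approximation. The least-squares setup and all names below are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of Fisher-weighted model merging:
# merged = sum_i F_i * theta_i / sum_i F_i (elementwise),
# with the diagonal Fisher approximated by mean squared per-example gradients.
import numpy as np

rng = np.random.default_rng(1)

def diag_fisher(theta, X, y):
    """Diagonal Fisher approximation for a least-squares model
    (an illustrative stand-in for a log-likelihood)."""
    resid = X @ theta - y
    per_example_grad = X * resid[:, None]        # shape (n, d)
    return np.mean(per_example_grad ** 2, axis=0) + 1e-8

def fisher_merge(thetas, fishers):
    """Elementwise Fisher-weighted average of parameter vectors."""
    F = np.stack(fishers)                        # (num_models, d)
    T = np.stack(thetas)
    return (F * T).sum(axis=0) / F.sum(axis=0)

# Two 'fine-tuned' models with their own data shards.
d = 4
theta_a, theta_b = rng.normal(size=d), rng.normal(size=d)
Xa, ya = rng.normal(size=(100, d)), rng.normal(size=100)
Xb, yb = rng.normal(size=(100, d)), rng.normal(size=100)

merged = fisher_merge([theta_a, theta_b],
                      [diag_fisher(theta_a, Xa, ya),
                       diag_fisher(theta_b, Xb, yb)])
print(merged)
```

Because the Fisher weights are positive, each merged parameter is a convex combination of the corresponding parameters, leaning toward the model that is more confident about that coordinate.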
arXiv Detail & Related papers (2021-11-18T17:59:35Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.