Federated Learning Aggregation: New Robust Algorithms with Guarantees
- URL: http://arxiv.org/abs/2205.10864v1
- Date: Sun, 22 May 2022 16:37:53 GMT
- Title: Federated Learning Aggregation: New Robust Algorithms with Guarantees
- Authors: Adnan Ben Mansour, Gaia Carenini, Alexandre Duplessis and David
Naccache
- Abstract summary: Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete, general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that are able to modify their model architecture by differentiating client contributions according to the value of their losses.
- Score: 63.96013144017572
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning has recently been proposed for distributed model
training at the edge. The principle of this approach is to aggregate models
learned on distributed clients to obtain a new, more general "average" model
(FedAvg). The resulting model is then redistributed to clients for further
training. To date, the most popular federated learning algorithm uses
coordinate-wise averaging of the model parameters for aggregation. In this
paper, we carry out a complete, general mathematical convergence analysis to
evaluate aggregation strategies in a federated learning framework. From this
analysis, we derive novel aggregation algorithms that are able to modify their
model architecture by differentiating client contributions according to the
value of their losses. Moreover, we go beyond the assumptions introduced in the
theory by evaluating the performance of these strategies and comparing them
with that of FedAvg on classification tasks, in both the IID and the non-IID
settings, without additional hypotheses.
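As a concrete illustration, here is a minimal sketch of coordinate-wise aggregation in Python/NumPy. The sample-proportion weights reproduce standard FedAvg; the loss-based weighting is a hypothetical instance of differentiating client contributions by the value of their losses, since the abstract does not give the paper's exact formula.

```python
import numpy as np

def aggregate(client_params, weights):
    """Coordinate-wise weighted average of flattened client parameter vectors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize to a convex combination
    stacked = np.stack(client_params)   # shape: (num_clients, num_params)
    return w @ stacked

def fedavg_weights(num_samples):
    """FedAvg: each client is weighted by its share of the training data."""
    n = np.asarray(num_samples, dtype=float)
    return n / n.sum()

def loss_based_weights(losses, temperature=1.0):
    """Hypothetical loss-aware weighting: clients with higher loss receive
    more weight, steering the average toward poorly fit clients. The paper's
    actual scheme may differ."""
    l = np.asarray(losses, dtype=float)
    w = np.exp(l / temperature)
    return w / w.sum()

# Toy usage: three clients with flattened two-parameter models.
params = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
print(aggregate(params, fedavg_weights([100, 50, 50])))
print(aggregate(params, loss_based_weights([0.9, 0.4, 0.6])))
```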
Related papers
- Vanishing Variance Problem in Fully Decentralized Neural-Network Systems [0.8212195887472242] (arXiv 2024-04-06)
Federated learning and gossip learning are emerging methodologies designed to mitigate data privacy concerns.
Our research introduces a variance-corrected model averaging algorithm.
Our simulation results demonstrate that our approach enables gossip learning to achieve convergence efficiency comparable to that of federated learning.
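For context, here is a minimal sketch of the plain gossip-averaging step that such decentralized systems build on; the variance correction itself is the paper's contribution and is not reproduced here. The pairwise-mean exchange and neighbor selection are illustrative assumptions.

```python
import random
import numpy as np

def gossip_round(node_params, rng=random):
    """One round of plain (uncorrected) gossip averaging: each node picks a
    random peer and both replace their models with the pairwise mean."""
    nodes = list(node_params)
    for i in nodes:
        j = rng.choice([k for k in nodes if k != i])
        mean = (node_params[i] + node_params[j]) / 2.0
        node_params[i] = node_params[j] = mean
    return node_params

# Toy usage: four nodes whose scalar "models" contract toward the global mean.
params = {i: np.array([float(i)]) for i in range(4)}
for _ in range(10):
    gossip_round(params)
print(params)  # every entry approaches the initial average, 1.5
```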
- Federated Learning with Projected Trajectory Regularization [65.6266768678291] (arXiv 2023-12-22)
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025] (arXiv 2023-11-10)
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
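Schematically, the swap the authors describe can be pictured as below; the inverse-bound weighting and the availability of per-client bound estimates are illustrative assumptions, not the paper's actual formula.

```python
import numpy as np

def sample_proportion_weights(num_samples):
    """The standard FL weighting: w_k = n_k / sum(n)."""
    n = np.asarray(num_samples, dtype=float)
    return n / n.sum()

def bound_based_weights(bound_estimates):
    """Hypothetical replacement: clients whose local models have a smaller
    estimated generalization bound receive more weight."""
    b = np.asarray(bound_estimates, dtype=float)
    w = 1.0 / (b + 1e-12)  # guard against division by zero
    return w / w.sum()

print(sample_proportion_weights([100, 50, 50]))  # [0.5  0.25 0.25]
print(bound_based_weights([0.2, 0.1, 0.4]))      # favors the middle client
```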
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161] (arXiv 2023-07-12)
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
- A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison [0.6299766708197883] (arXiv 2021-10-19)
Pervasive computing promotes the installation of connected devices in our living spaces to provide services.
Two major developments have gained significant momentum recently: an advanced use of edge resources and the integration of machine learning techniques for engineering applications.
We propose a novel aggregation algorithm, termed FedDist, which can modify its model architecture by identifying dissimilarities between specific neurons among the clients.
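A rough sketch of the kind of neuron-level dissimilarity test this implies, assuming each layer's neurons are compared row-wise across clients; the Euclidean distance and fixed threshold are illustrative, not FedDist's actual criterion.

```python
import numpy as np

def divergent_neurons(server_layer, client_layers, threshold=1.0):
    """Flag neurons whose incoming-weight rows drift far from the server's on
    some client; a FedDist-style scheme could then grow the architecture by
    adding such neurons instead of averaging them away."""
    flagged = set()
    for client_layer in client_layers:
        dists = np.linalg.norm(client_layer - server_layer, axis=1)
        flagged.update(np.nonzero(dists > threshold)[0].tolist())
    return sorted(flagged)

# Toy usage: a layer of 3 neurons with 4 inputs each, across two clients.
server = np.zeros((3, 4))
clients = [np.zeros((3, 4)), np.zeros((3, 4))]
clients[1][2] += 2.0                       # client 1's neuron 2 has drifted
print(divergent_neurons(server, clients))  # -> [2]
```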
- Model Fusion with Kullback-Leibler Divergence [58.20269014662046] (arXiv 2020-07-13)
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
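As a toy illustration of what mean-field fusion can look like, the snippet below combines diagonal Gaussian posteriors with the classical product-of-Gaussians identity (summed precisions, precision-weighted means); the paper's KL-based objective may yield a different combination rule.

```python
import numpy as np

def fuse_gaussian_posteriors(means, variances):
    """Fuse diagonal Gaussians q_i = N(mu_i, sigma_i^2) via their normalized
    product: precisions add, and means are precision-weighted."""
    mus = np.stack(means)
    precisions = 1.0 / np.stack(variances)
    fused_precision = precisions.sum(axis=0)
    fused_mean = (precisions * mus).sum(axis=0) / fused_precision
    return fused_mean, 1.0 / fused_precision

# Toy usage: two per-dataset posteriors over a 2-parameter model.
mean, var = fuse_gaussian_posteriors(
    means=[np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    variances=[np.array([1.0, 0.5]), np.array([1.0, 2.0])],
)
print(mean, var)  # -> [0.5 1.0] [0.5 0.4]
```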
- Control as Hybrid Inference [62.997667081978825] (arXiv 2020-07-11)
We present an implementation of control as hybrid inference (CHI) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
- Struct-MMSB: Mixed Membership Stochastic Blockmodels with Interpretable Structured Priors [13.712395104755783] (arXiv 2020-02-21)
The mixed membership stochastic blockmodel (MMSB) is a popular framework for community detection and network generation.
We present a flexible MMSB model, Struct-MMSB, that uses a recently developed statistical relational learning model, hinge-loss Markov random fields (HL-MRFs).
Our model is capable of learning latent characteristics in real-world networks via meaningful latent variables encoded as a complex combination of observed features and membership distributions.
This list is automatically generated from the titles and abstracts of the papers on this site.