Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
- URL: http://arxiv.org/abs/2307.06283v1
- Date: Wed, 12 Jul 2023 16:28:21 GMT
- Title: Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
- Authors: Adnan Ben Mansour, Gaia Carenini, Alexandre Duplessis
- Abstract summary: We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in federated optimization, in terms of both heterogeneous data and local updates.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The future of machine learning lies in moving both data collection and training to the edge. Federated Learning (FL) has recently been proposed to achieve this goal. The principle of this approach is to aggregate models learned over a large number of distributed clients, i.e., resource-constrained mobile devices that collect data from their environment, to obtain a new, more general model. The latter is subsequently redistributed to clients for further training. A key feature that distinguishes federated learning from data-center-based distributed training is the inherent heterogeneity. In this work, we introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in federated optimization, in terms of both heterogeneous data and local updates. The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
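For concreteness, the following is a minimal sketch of the generic server-side aggregation step that such frameworks build on: a weighted average of client models, where the per-client weights (here taken to be local dataset sizes or local update counts) are one simple way to reflect heterogeneity. All names and the weighting rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def aggregate(client_models, client_weights):
    """Weighted average of client model parameters (FedAvg-style).

    client_models: one list of np.ndarray layers per client.
    client_weights: per-client scalars, e.g. local dataset sizes or
        numbers of local updates, a simple proxy for heterogeneity.
    """
    total = float(sum(client_weights))
    num_layers = len(client_models[0])
    return [
        sum((w / total) * model[k]
            for model, w in zip(client_models, client_weights))
        for k in range(num_layers)
    ]

# Toy usage: two single-layer "models", weighted 3:1.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 6.0])]]
print(aggregate(clients, client_weights=[3, 1]))  # [array([1.5, 3.])]
```

A full FL round would wrap this in a loop: broadcast the aggregated model, let each client train locally, collect the updated models, and aggregate again.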
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity [14.313847382199059]
Fed-QSSL is a federated quantization-based self-supervised learning scheme designed to address heterogeneity in FL systems.
Fed-QSSL deploys de-quantization, weighted aggregation, and re-quantization, ultimately creating models personalized to both the data distribution and the specific infrastructure of each client's device.
arXiv Detail & Related papers (2023-12-20T19:11:19Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that differentiate client contributions according to the value of their losses; a loss-aware weighting of this kind is sketched after this list.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that addresses both challenges, distribution shift across clients and inter-client noise, simultaneously.
We provide a comprehensive theoretical analysis, including robustness, convergence, and generalization analyses.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison [0.6299766708197883]
Pervasive computing promotes the installation of connected devices in our living spaces in order to provide services.
Two major developments have recently gained significant momentum: an advanced use of edge resources and the integration of machine learning techniques into application engineering.
We propose a novel aggregation algorithm, termed FedDist, which is able to modify its model architecture by identifying dissimilarities between specific neurons amongst the clients.
arXiv Detail & Related papers (2021-10-19T19:43:28Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models from local datasets related by a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization; one common form of this objective is written out after this list.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
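As a follow-up to the "Federated Learning Aggregation: New Robust Algorithms with Guarantees" entry above, here is one hypothetical way to differentiate client contributions according to their losses: a softmax weighting over client losses. This is a sketch of the general idea only; the exact rule, and the `temperature` knob, are assumptions rather than that paper's method.

```python
import numpy as np

def loss_weighted_aggregate(client_models, client_losses, temperature=1.0):
    # Softmax over client losses: clients with a higher local loss get a
    # larger weight, steering the global model toward poorly-fit clients.
    # (Use -losses instead to down-weight high-loss, possibly noisy, clients.)
    losses = np.asarray(client_losses, dtype=float)
    weights = np.exp(losses / temperature)
    weights = weights / weights.sum()
    num_layers = len(client_models[0])
    return [
        sum(w * model[k] for model, w in zip(client_models, weights))
        for k in range(num_layers)
    ]

# Toy usage: the client with loss 2.0 receives ~73% of the weight.
clients = [[np.array([0.0, 0.0])], [np.array([1.0, 1.0])]]
print(loss_weighted_aggregate(clients, client_losses=[1.0, 2.0]))
```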
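Similarly, for the "Clustered Federated Learning via Generalized Total Variation Minimization" entry, one common form of a GTV-regularized federated objective (written from the general GTVMin literature, not copied from that paper) couples the local models over the edges of a client graph:

```latex
% Local models w_i with local empirical losses L_i; non-negative edge
% weights A_{ij} on a client graph G = (V, E); \lambda trades local fit
% against the GTV penalty, which pulls the models of well-connected
% clients together and thereby induces clusters.
\min_{w_1, \dots, w_n} \; \sum_{i \in V} L_i(w_i)
  \;+\; \lambda \sum_{\{i,j\} \in E} A_{ij}\, \lVert w_i - w_j \rVert_2
```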
This list is automatically generated from the titles and abstracts of the papers on this site.