Better Methods and Theory for Federated Learning: Compression, Client
Selection and Heterogeneity
- URL: http://arxiv.org/abs/2207.00392v1
- Date: Fri, 1 Jul 2022 12:55:09 GMT
- Title: Better Methods and Theory for Federated Learning: Compression, Client
Selection and Heterogeneity
- Authors: Samuel Horváth
- Abstract summary: Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile phone devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server.
In this thesis, we identify several of these challenges and propose new methods and algorithms to address them, with the ultimate goal of enabling practical FL solutions supported with mathematically rigorous guarantees.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging machine learning paradigm involving
multiple clients, e.g., mobile phone devices, with an incentive to collaborate
in solving a machine learning problem coordinated by a central server. FL was
proposed in 2016 by Konečný et al. and McMahan et al. as a viable
privacy-preserving alternative to traditional centralized machine learning
since, by construction, the training data points are decentralized and never
transferred by the clients to a central server. Therefore, to a certain degree,
FL mitigates the privacy risks associated with centralized data collection.
Unfortunately, optimization for FL faces several specific issues that
centralized optimization usually does not need to handle. In this thesis, we
identify several of these challenges and propose new methods and algorithms to
address them, with the ultimate goal of enabling practical FL solutions
supported with mathematically rigorous guarantees.
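To make the server-coordinated paradigm concrete, here is a minimal sketch of one FedAvg-style communication round on a least-squares toy problem. FedAvg is the baseline algorithm of McMahan et al.; every name, dataset, and constant below is illustrative rather than taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_grad(w, X, y):
    # Gradient of the least-squares loss 0.5 * ||X w - y||^2 / n on one
    # client's private data; the raw data never leaves the client.
    return X.T @ (X @ w - y) / len(y)

def client_update(w_global, X, y, lr=0.1, local_steps=5):
    # Each client refines the broadcast model locally and returns only
    # the updated weights.
    w = w_global.copy()
    for _ in range(local_steps):
        w -= lr * local_grad(w, X, y)
    return w

# Three clients with heterogeneous synthetic datasets.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    updates = [client_update(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)  # the server aggregates by uniform averaging
```

Only model weights cross the network in this loop, which is the property the abstract credits with mitigating the privacy risks of centralized data collection.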
Related papers
- A Framework for testing Federated Learning algorithms using an edge-like environment [0.0]
Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized.
It is non-trivial to accurately evaluate the contributions of local models in global centralized model aggregation.
This is an example of a major challenge in FL, commonly known as data imbalance or class imbalance.
In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way.
arXiv Detail & Related papers (2024-07-17T19:52:53Z)
- Fantastyc: Blockchain-based Federated Learning Made Secure and Practical [0.7083294473439816]
Federated Learning is a decentralized framework that enables clients to collaboratively train a machine learning model under the orchestration of a central server without sharing their local data.
The centrality of this framework represents a single point of failure, which is addressed in the literature by blockchain-based federated learning approaches.
We propose Fantastyc, a solution designed to address challenges that have never been met together in the state of the art.
arXiv Detail & Related papers (2024-06-05T20:01:49Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
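As a rough picture of the multi-server topology only (not the paper's event-triggered SAGA method), each server could aggregate its own group of users before the servers average among themselves; group sizes, dimensions, and uniform weights below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two servers, each coordinating its own group of three user models.
groups = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]

server_models = [np.mean(g, axis=0) for g in groups]  # intra-server aggregation
confederated = np.mean(server_models, axis=0)         # inter-server exchange
```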
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- A Survey on Decentralized Federated Learning [0.709016563801433]
In recent years, federated learning has become a popular paradigm for training distributed, large-scale, and privacy-preserving machine learning (ML) systems.
In a typical FL system, the central server acts only as an orchestrator; it iteratively gathers and aggregates all the local models trained by each client on its private data until convergence.
One of the most critical challenges is to overcome the centralized orchestration of the classical FL client-server architecture.
Decentralized FL solutions have emerged where all FL clients cooperate and communicate without a central server.
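A common serverless aggregation primitive behind such decentralized methods is gossip averaging over a communication graph. Below is a sketch on an assumed ring topology with doubly stochastic mixing weights; real decentralized FL methods interleave this mixing with local training.

```python
import numpy as np

n = 4
models = np.random.default_rng(2).normal(size=(n, 3))  # one model per client

# Ring topology: each client mixes with its two neighbors. W is doubly
# stochastic, so repeated mixing drives all clients to the same average.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(50):
    models = W @ models  # one gossip round: average with neighbors
# All rows of `models` are now numerically equal to the initial mean.
```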
arXiv Detail & Related papers (2023-08-08T22:07:15Z)
- Multi-Tier Client Selection for Mobile Federated Learning Networks [13.809694368802827]
We propose a first-of-its-kind Socially-aware Federated Client Selection (SocFedCS) approach to minimize costs and train high-quality FL models.
SocFedCS enriches the candidate FL client pool by enabling data owners to propagate FL task information through their local networks of trust.
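The trust-network propagation can be pictured as a bounded breadth-first spread of the task announcement through data owners' social links; the graph, hop limit, and names below are hypothetical, not from the paper.

```python
from collections import deque

trust = {  # hypothetical directed trust edges between device owners
    "initiator": ["a", "b"],
    "a": ["c"],
    "b": ["c", "d"],
    "c": [],
    "d": ["e"],
    "e": [],
}

def candidate_pool(source, max_hops=2):
    # Collect every owner reachable within max_hops trust links; these
    # owners become additional candidate FL clients.
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for nxt in trust.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return seen - {source}

print(candidate_pool("initiator"))  # {'a', 'b', 'c', 'd'}
```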
arXiv Detail & Related papers (2023-05-11T15:06:08Z)
- Federated Gradient Matching Pursuit [17.695717854068715]
Traditional machine learning techniques require centralizing all training data on one server or data hub.
Federated learning (FL) instead provides a solution for learning a shared model while keeping the training data at local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity constrained minimization problem in the FL setting.
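The problem setting, though not FedGradMP's actual matching-pursuit steps, can be illustrated with a federated hard-thresholding loop that keeps the global model s-sparse; everything below is a simplified stand-in.

```python
import numpy as np

def hard_threshold(w, s):
    # Project w onto the set of s-sparse vectors by keeping its s
    # largest-magnitude entries.
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

rng = np.random.default_rng(3)
clients = [(rng.normal(size=(30, 10)), rng.normal(size=30)) for _ in range(3)]
w, s, lr = np.zeros(10), 3, 0.05

for _ in range(100):
    # Each client computes a least-squares gradient on its local data; the
    # server averages the gradients and enforces the sparsity constraint.
    grads = [X.T @ (X @ w - y) / len(y) for X, y in clients]
    w = hard_threshold(w - lr * np.mean(grads, axis=0), s)
```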
arXiv Detail & Related papers (2023-02-20T16:26:29Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
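As a cartoon of the personalized-sparse-mask idea (Dis-PFL's actual mask learning and peer-to-peer schedule are far more involved), each client could hold a binary mask and train and communicate only the masked coordinates:

```python
import numpy as np

rng = np.random.default_rng(4)
dim, density = 10, 0.3

def random_mask(dim, density):
    # A randomly chosen support, standing in for a learned personalized mask.
    m = np.zeros(dim)
    m[rng.choice(dim, int(density * dim), replace=False)] = 1.0
    return m

shared = rng.normal(size=dim)
masks = [random_mask(dim, density) for _ in range(3)]
# Each client's personalized sparse model: shared weights under its own
# mask, so only a density fraction of coordinates is stored and exchanged.
local_models = [shared * m for m in masks]
```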
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis of the smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
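For context, a standard distributed saddle point formulation over M clients (assumed here rather than quoted from the paper) reads:

```latex
% Each client m holds a local (strongly) convex-(strongly) concave
% objective f_m; the clients jointly solve
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \;
  f(x, y) \;=\; \frac{1}{M} \sum_{m=1}^{M} f_m(x, y).
```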
arXiv Detail & Related papers (2021-06-14T10:36:25Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
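The round structure described above can be sketched as follows; block generation is simulated by a random winner in place of real mining, and all details are simplified placeholders rather than the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(5)

def blade_fl_round(local_models):
    # 1) Every client broadcasts its locally trained model.
    # 2) Clients compete to generate a block holding the received models
    #    (a random winner stands in for the mining competition).
    # 3) Each client aggregates the block's models before its next round
    #    of local training.
    winner = rng.integers(len(local_models))
    block = list(local_models)
    aggregated = np.mean(block, axis=0)
    return aggregated, winner

models = [rng.normal(size=5) for _ in range(4)]
new_global, block_producer = blade_fl_round(models)
```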
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.