A Survey on Decentralized Federated Learning
- URL: http://arxiv.org/abs/2308.04604v1
- Date: Tue, 8 Aug 2023 22:07:15 GMT
- Title: A Survey on Decentralized Federated Learning
- Authors: Edoardo Gabrielli, Giovanni Pica, Gabriele Tolomei
- Abstract summary: In recent years, federated learning (FL) has become a popular paradigm for training distributed, large-scale, and privacy-preserving machine learning (ML) systems.
In a typical FL system, the central server acts only as an orchestrator; it iteratively gathers and aggregates all the local models trained by each client on its private data until convergence.
One of the most critical challenges is to overcome the centralized orchestration of the classical FL client-server architecture.
Decentralized FL solutions have emerged where all FL clients cooperate and communicate without a central server.
- Score: 0.709016563801433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, federated learning (FL) has become a very popular paradigm
for training distributed, large-scale, and privacy-preserving machine learning
(ML) systems. In contrast to standard ML, where data must be collected at the
exact location where training is performed, FL takes advantage of the
computational capabilities of millions of edge devices to collaboratively train
a shared, global model without disclosing their local private data.
Specifically, in a typical FL system, the central server acts only as an
orchestrator; it iteratively gathers and aggregates all the local models
trained by each client on its private data until convergence. Although FL
undoubtedly has several benefits over traditional ML (e.g., it protects private
data ownership by design), it suffers from several weaknesses. One of the most
critical challenges is to overcome the centralized orchestration of the
classical FL client-server architecture, which is known to be vulnerable to
single-point-of-failure risks and man-in-the-middle attacks, among others. To
mitigate such exposure, decentralized FL solutions have emerged where all FL
clients cooperate and communicate without a central server. This survey
comprehensively summarizes and reviews existing decentralized FL approaches
proposed in the literature. Furthermore, it identifies emerging challenges and
suggests promising research directions in this under-explored domain.
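The two orchestration styles contrasted in the abstract can be made concrete with a short sketch. The following minimal NumPy example is illustrative only; names such as local_update, fedavg_round, and gossip_round are hypothetical and not taken from the survey. fedavg_round shows the classical client-server loop (the server gathers and averages locally trained models), while gossip_round shows a serverless alternative in which each client averages its model with those of its neighbors over a peer-to-peer topology.

```python
# Illustrative sketch only (not from the survey): contrasting centralized
# FedAvg-style orchestration with serverless gossip averaging.
import numpy as np

def local_update(weights, private_data, lr=0.1):
    """Stand-in for one client's local training pass; the gradient is a
    random placeholder since no real data or model is defined here."""
    grad = 0.01 * np.random.randn(*weights.shape)
    return weights - lr * grad

def fedavg_round(global_w, client_datasets):
    """Centralized FL round: the server sends the global model to every
    client, gathers the locally trained copies, and averages them."""
    local_models = [local_update(global_w.copy(), d) for d in client_datasets]
    return np.mean(local_models, axis=0)

def gossip_round(local_ws, neighbors, client_datasets):
    """Decentralized FL round: no server; each client trains locally and
    then averages its model with its neighbors' models."""
    trained = [local_update(w.copy(), d) for w, d in zip(local_ws, client_datasets)]
    return [np.mean([trained[i]] + [trained[j] for j in neighbors[i]], axis=0)
            for i in range(len(trained))]

# Toy run: 4 clients; the decentralized case uses a ring topology.
client_datasets = [None] * 4              # placeholders for private data
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

w = np.zeros(10)
for _ in range(5):                        # iterate "until convergence"
    w = fedavg_round(w, client_datasets)  # client-server orchestration

local_ws = [np.zeros(10) for _ in client_datasets]
for _ in range(5):
    local_ws = gossip_round(local_ws, ring, client_datasets)  # serverless
```

The fixed ring topology in the decentralized round is one arbitrary choice; the decentralized FL approaches this survey reviews differ in topology, mixing weights, and synchrony.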
Related papers
- A Framework for testing Federated Learning algorithms using an edge-like environment
Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized.
Accurately evaluating the contribution of each local model to the aggregated global model is non-trivial.
This reflects a major challenge in FL, commonly known as data or class imbalance.
In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way.
arXiv Detail & Related papers (2024-07-17T19:52:53Z)
- Tunable Soft Prompts are Messengers in Federated Learning
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables DL models to be trained collaboratively across clients without sharing their private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity
Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile phone devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server.
In this thesis, we identify several of these challenges and propose new methods and algorithms to address them, with the ultimate goal of enabling practical FL solutions supported with mathematically rigorous guarantees.
arXiv Detail & Related papers (2022-07-01T12:55:09Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Semi-Supervised Federated Learning with non-IID Data: Algorithm and System Design
Federated Learning (FL) allows edge devices (or clients) to keep data locally while simultaneously training a shared global model.
Clients' local training data are typically not independent and identically distributed (non-IID).
We present a robust semi-supervised FL system design that aims to solve the problems of data availability and non-IID data in FL.
arXiv Detail & Related papers (2021-10-26T03:41:48Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Continual Local Training for Better Initialization of Federated Models
Federated learning (FL) refers to the learning paradigm that trains machine learning models directly in decentralized systems.
The popular FL algorithm Federated Averaging (FedAvg) suffers from weight divergence.
We propose the local continual training strategy to address this problem.
arXiv Detail & Related papers (2020-05-26T12:27:31Z)
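The weight divergence noted in the last entry above is often illustrated with a proximal-style penalty that pulls each client's local update back toward the last global model. The sketch below is a generic, hypothetical example of that idea in plain NumPy (the name local_step_with_prox is invented here); it does not reproduce the paper's local continual training strategy.

```python
# Hedged sketch (not the paper's method): a proximal penalty
# (mu/2) * ||w_local - w_global||^2 added to the local objective is one
# common way to limit weight divergence under non-IID client data.
import numpy as np

def local_step_with_prox(w_local, w_global, grad, lr=0.1, mu=0.01):
    """One SGD step on the local gradient plus the proximal term's
    gradient mu * (w_local - w_global), which penalizes divergence."""
    return w_local - lr * (grad + mu * (w_local - w_global))

# Toy usage with a placeholder local gradient.
w_global = np.zeros(5)
w_local = np.ones(5)
placeholder_grad = np.full(5, 0.2)
w_local = local_step_with_prox(w_local, w_global, placeholder_grad)
```

A larger mu ties local models more tightly to the global model, trading personalization for stability; mu = 0 recovers the plain local SGD used in FedAvg.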