Keep It Simple: Fault Tolerance Evaluation of Federated Learning with
Unreliable Clients
- URL: http://arxiv.org/abs/2305.09856v1
- Date: Tue, 16 May 2023 23:55:47 GMT
- Title: Keep It Simple: Fault Tolerance Evaluation of Federated Learning with
Unreliable Clients
- Authors: Victoria Huang, Shaleeza Sohail, Michael Mayo, Tania Lorido Botran,
Mark Rodrigues, Chris Anderson, Melanie Ooi
- Abstract summary: Federated learning (FL) enables decentralized model training across multiple devices without exposing their local training data.
We show that simple FL algorithms can perform surprisingly well in the presence of unreliable clients.
- Score: 0.28939699256527274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), as an emerging artificial intelligence (AI)
approach, enables decentralized model training across multiple devices without
exposing their local training data. FL has been increasingly gaining popularity
in both academia and industry. While many works have been proposed to
improve the fault tolerance of FL, the real impact of unreliable devices (e.g.,
dropping out, misconfiguration, poor data quality) in real-world applications
has not been fully investigated. We carefully chose two representative, real-world
classification problems with a limited number of clients to better analyze FL
fault tolerance. Contrary to intuition, simple FL algorithms can perform
surprisingly well in the presence of unreliable clients.
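The abstract comes with no code here, but the setup it evaluates is easy to illustrate. The following is a minimal sketch, assuming synthetic data, a linear model, and illustrative parameter names (none taken from the paper), of how two of the unreliability modes mentioned, client drop-out and poor data quality, can be injected into a plain FedAvg loop.

```python
# Minimal sketch (not the authors' code): injecting unreliable clients into a
# plain FedAvg round. Client behaviour, model, and data are all illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain least-squares gradient steps on a linear model."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def make_client(n=50, d=5, noisy_labels=False):
    """Synthetic client data; optionally corrupt labels to mimic poor data quality."""
    X = rng.normal(size=(n, d))
    y = X @ np.ones(d) + 0.1 * rng.normal(size=n)
    if noisy_labels:
        y = rng.permutation(y)          # destroy the input-label relationship
    return X, y

clients = [make_client(noisy_labels=(i == 0)) for i in range(5)]  # client 0 is unreliable
w_global = np.zeros(5)

for rnd in range(20):
    updates = []
    for X, y in clients:
        if rng.random() < 0.3:          # ~30% chance a client drops out this round
            continue
        updates.append(local_update(w_global.copy(), X, y))
    if updates:                          # simple FedAvg over the clients that responded
        w_global = np.mean(updates, axis=0)

print("final global weights:", np.round(w_global, 3))
```

Sweeping the drop-out rate or the number of corrupted clients in such a loop mirrors the kind of fault-tolerance evaluation described above, at small client counts.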
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- Fast-FedUL: A Training-Free Federated Unlearning with Provable Skew Resilience [26.647028483763137]
We introduce Fast-FedUL, a tailored unlearning method for Federated Learning (FL).
We develop an algorithm to systematically remove the impact of the target client from the trained model.
Experimental results indicate that Fast-FedUL effectively removes almost all traces of the target client, while retaining the knowledge of untargeted clients.
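Fast-FedUL's actual algorithm and its skew-resilience analysis are not spelled out in this summary. The sketch below, with assumed names and fabricated numbers, only illustrates the general bookkeeping that training-free unlearning of a target client can rely on: keep the per-round updates each client contributed and roll the target's contributions back out of the trained model.

```python
# Illustrative sketch only: a generic "subtract stored updates" unlearning baseline.
# It is NOT the Fast-FedUL algorithm; it just shows the per-client, per-round update
# history that such training-free unlearning methods build on.
import numpy as np

def unlearn_client(w_final, update_history, target_id, scale=1.0):
    """
    w_final:         trained global model parameters (1-D array)
    update_history:  {round: {client_id: update vector applied that round}}
    target_id:       client whose influence should be removed
    scale:           crude damping factor (Fast-FedUL instead derives principled weights)
    """
    w = w_final.copy()
    for per_round in update_history.values():
        if target_id in per_round:
            w -= scale * per_round[target_id]   # roll back the target's contributions
    return w

# toy usage with fabricated numbers
history = {r: {c: np.full(3, 0.01 * (c + 1)) for c in range(3)} for r in range(10)}
w_trained = np.ones(3)
print(unlearn_client(w_trained, history, target_id=2))
```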
arXiv Detail & Related papers (2024-05-28T10:51:38Z)
- Fairness-Aware Client Selection for Federated Learning [13.781019191483864]
Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients) to train machine learning models collaboratively without revealing private data.
Since the FL server can only engage a limited number of clients in each training round, FL client selection has become an important research problem.
We propose the Fairness-aware Federated Client Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically adjusts FL clients' selection probabilities by jointly considering their reputations, times of participation in FL tasks and contributions to the resulting model performance.
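The Lyapunov optimization behind FairFedCS is not reproduced here; the sketch below, with assumed names and weights, only illustrates the stated idea of turning reputation, past participation, and contribution into per-round selection probabilities.

```python
# Minimal sketch (assumed names, no Lyapunov machinery): fairness-aware client
# selection that favours reputable, high-contribution clients while discounting
# clients that have already participated often.
import numpy as np

rng = np.random.default_rng(1)

def selection_probs(reputation, participation, contribution, alpha=1.0, beta=1.0):
    """Higher reputation/contribution raise a client's chance; frequent past
    participation lowers it so rarely-chosen clients are not starved."""
    score = alpha * reputation + beta * contribution - np.log1p(participation)
    score = np.exp(score - score.max())          # softmax gives a valid distribution
    return score / score.sum()

n_clients, rounds, k = 10, 50, 3
reputation = rng.uniform(0.5, 1.0, n_clients)
contribution = rng.uniform(0.0, 1.0, n_clients)
participation = np.zeros(n_clients)

for _ in range(rounds):
    p = selection_probs(reputation, participation, contribution)
    chosen = rng.choice(n_clients, size=k, replace=False, p=p)
    participation[chosen] += 1                   # bookkeeping used in the next round

print("times each client was selected:", participation.astype(int))
```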
arXiv Detail & Related papers (2023-07-20T10:04:55Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to its centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
- Combating Client Dropout in Federated Learning via Friend Model Substitution [8.325089307976654]
Federated learning (FL) is a new distributed machine learning framework known for its benefits on data privacy and communication efficiency.
This paper studies a passive partial client participation scenario that is much less well understood.
We develop a new algorithm, FL-FDMS, that discovers friends of clients (i.e., clients whose data distributions are similar) and substitutes a friend's model for that of a dropped client.
Experiments on MNIST and CIFAR-10 confirm the superior performance of FL-FDMS in handling client dropout in FL.
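As a rough illustration of the friend-model-substitution idea (not the FL-FDMS algorithm itself), the sketch below replaces a dropped client's update with that of the responding client whose previous update was most similar; the cosine-similarity criterion and all names are assumptions.

```python
# Minimal sketch (names assumed, not FL-FDMS itself): when a client drops out,
# reuse the update of its most similar "friend", with similarity measured here as
# cosine similarity between the clients' most recently submitted updates.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def aggregate_with_friends(updates, dropped, last_seen):
    """
    updates:   {client_id: update vector} from clients that responded this round
    dropped:   ids of clients that dropped out this round
    last_seen: {client_id: the most recent update each client ever sent}
    """
    pool = dict(updates)
    for d in dropped:
        # friend = responding client whose previous update was most similar to d's
        friend = max(updates, key=lambda c: cosine(last_seen[d], last_seen[c]))
        pool[d] = updates[friend]                 # substitute the friend's model update
    return np.mean(list(pool.values()), axis=0)   # plain FedAvg over the completed pool

# toy usage with fabricated vectors
last = {0: np.array([1.0, 0.0]), 1: np.array([0.9, 0.1]), 2: np.array([0.0, 1.0])}
this_round = {1: np.array([0.8, 0.2]), 2: np.array([0.1, 0.9])}
print(aggregate_with_friends(this_round, dropped=[0], last_seen=last))
```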
arXiv Detail & Related papers (2022-05-26T08:34:28Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method for handling device heterogeneity.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
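InclusiveFL's knowledge-sharing mechanism is not detailed in this summary, so the sketch below only illustrates the size-assignment idea with a toy layer-wise aggregation in which each layer is averaged over the clients large enough to hold it; all shapes and names are assumptions.

```python
# Sketch under assumptions (not InclusiveFL's exact method): weaker clients keep a
# shallower model; aggregation averages each layer only across the clients that
# actually hold that layer, one simple way to share knowledge across model sizes.
import numpy as np

rng = np.random.default_rng(2)
full_depth, width = 4, 3

# assign model sizes by (pretend) device capability: number of layers each client keeps
client_depths = [4, 4, 2, 1]
clients = [[rng.normal(size=width) for _ in range(d)] for d in client_depths]

global_model = []
for layer in range(full_depth):
    holders = [c[layer] for c in clients if len(c) > layer]   # clients that have this layer
    global_model.append(np.mean(holders, axis=0))

for i, layer in enumerate(global_model):
    n_holders = sum(len(c) > i for c in clients)
    print(f"layer {i}: averaged over {n_holders} clients ->", layer.round(2))
```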
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
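The "carefully designed batch-normalization statistics" are not specified in this summary, so the sketch below is only a rough illustration under assumptions: BN running statistics are taken from clients that performed adversarial training and broadcast to everyone, while the remaining weights are averaged as usual.

```python
# Illustrative sketch only (assumed structure): propagating robustness by sharing
# batch-normalization statistics from clients that ran adversarial training (AT)
# with clients that did not, alongside plain averaging of the other weights.
import numpy as np

def propagate_bn(client_states, at_clients):
    """
    client_states: {client_id: {"weights": vector, "bn_mean": vector, "bn_var": vector}}
    at_clients:    ids of clients that performed adversarial training locally
    """
    # ordinary FedAvg for the non-BN weights
    avg_w = np.mean([s["weights"] for s in client_states.values()], axis=0)
    # BN statistics come from the AT clients only, then are broadcast to everyone
    bn_mean = np.mean([client_states[c]["bn_mean"] for c in at_clients], axis=0)
    bn_var = np.mean([client_states[c]["bn_var"] for c in at_clients], axis=0)
    return {"weights": avg_w, "bn_mean": bn_mean, "bn_var": bn_var}

states = {
    0: {"weights": np.ones(4), "bn_mean": np.full(2, 0.5), "bn_var": np.full(2, 1.5)},
    1: {"weights": np.zeros(4), "bn_mean": np.zeros(2), "bn_var": np.ones(2)},
}
print(propagate_bn(states, at_clients=[0]))
```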
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
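A minimal sketch of one round as described above, with placeholder names and a random stand-in for block generation (real mining/consensus and the role of the parameter K are omitted):

```python
# Minimal sketch of one BLADE-FL-style round: broadcast local models, let one client
# "win" block generation, then aggregate the models recorded in that block.
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim = 4, 3
models = [rng.normal(size=dim) for _ in range(n_clients)]   # locally trained models

def run_round(models):
    # 1) every client broadcasts its trained model to the others
    received = {i: [m for j, m in enumerate(models) if j != i] for i in range(n_clients)}
    # 2) clients compete to generate a block; here the winner is simply drawn at random
    winner = int(rng.integers(n_clients))
    block = received[winner] + [models[winner]]              # models recorded in the block
    # 3) each client aggregates the block's models before its next round of local training
    w_next = np.mean(block, axis=0)
    return winner, w_next

winner, w_next = run_round(models)
print(f"block generated by client {winner}; aggregated model for next round:", w_next.round(3))
```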
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.