Towards Understanding Quality Challenges of the Federated Learning: A
First Look from the Lens of Robustness
- URL: http://arxiv.org/abs/2201.01409v1
- Date: Wed, 5 Jan 2022 02:06:39 GMT
- Title: Towards Understanding Quality Challenges of the Federated Learning: A
First Look from the Lens of Robustness
- Authors: Amin Eslami Abyane, Derui Zhu, Roberto Medeiros de Souza, Lei Ma, Hadi
Hemmati
- Abstract summary: Federated learning (FL) aims to preserve users' data privacy while leveraging the entire dataset of all participants for training.
FL still tends to suffer from quality issues such as attacks or Byzantine faults.
This paper investigates the effectiveness of state-of-the-art (SOTA) robust FL techniques in the presence of attacks and faults.
- Score: 4.822471415125479
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a widely adopted distributed learning paradigm in
practice, which intends to preserve users' data privacy while leveraging the
entire dataset of all participants for training. In FL, multiple models are
trained independently on users' devices and aggregated centrally to update a
global model in an iterative process. Although this approach is excellent at
preserving privacy by design, FL still tends to suffer from quality issues such
as attacks or Byzantine faults. Recent work has attempted to address such
quality challenges through robust aggregation techniques for FL. However, the
effectiveness of state-of-the-art (SOTA) robust FL techniques is still unclear
and lacks a comprehensive study. Therefore, to better understand the current
quality status and challenges of these SOTA FL techniques in the presence of
attacks and faults, in this paper we perform a large-scale empirical study that
investigates their quality from multiple angles: attacks, simulated faults (via
mutation operators), and aggregation (defense) methods. In particular, we
perform our study on two generic image datasets and
one real-world federated medical image dataset. We also systematically
investigate the effect of the distribution of attacks/faults over users and the
independent and identically distributed (IID) factors, per dataset, on the
robustness results. After a large-scale analysis with 496 configurations, we
find that most mutators applied to individual users have a negligible effect on
the final model. Moreover, the choice of the most robust FL aggregator depends
on the attacks and datasets. Finally, we show that a simple ensemble of
aggregators achieves a generic solution that performs almost as well as, or even
better than, any single aggregator across all attacks and configurations.
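The ensemble idea above can be sketched in code. The following is a minimal illustration, not the paper's implementation: the aggregator set (FedAvg, coordinate-wise median, trimmed mean) and the combination rule (coordinate-wise median of the aggregators' outputs) are assumptions chosen for clarity, and the Byzantine clients stand in for the paper's attacks and mutation-based faults:

```python
# Illustrative sketch of robust FL aggregation and a simple ensemble of
# aggregators. The aggregator set and the ensemble rule below are
# assumptions for illustration, not the paper's exact configuration.
import numpy as np

def fedavg(updates):
    """Plain averaging (FedAvg): vulnerable to Byzantine updates."""
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    """Coordinate-wise median: a classic robust aggregator."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Drop the largest/smallest values per coordinate, then average."""
    k = int(len(updates) * trim_ratio)
    s = np.sort(updates, axis=0)
    return np.mean(s[k:len(updates) - k], axis=0)

def ensemble(updates):
    """Hypothetical ensemble rule: coordinate-wise median of the
    three aggregators' outputs."""
    outs = np.stack([fedavg(updates),
                     coordinate_median(updates),
                     trimmed_mean(updates)])
    return np.median(outs, axis=0)

# Simulate 10 honest clients whose updates are near the true direction
# [1, 1], plus 2 Byzantine clients sending a large faulty update.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(10, 2))
byzantine = np.full((2, 2), 100.0)
updates = np.vstack([honest, byzantine])

print("fedavg:  ", fedavg(updates))        # dragged far from [1, 1]
print("median:  ", coordinate_median(updates))
print("ensemble:", ensemble(updates))      # stays near [1, 1]
```

Because a majority of the ensemble's member aggregators remain accurate under this attack, taking the median of their outputs inherits their robustness while still tracking FedAvg when no clients misbehave.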
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning [27.782676760198697]
Federated Learning (FL) has emerged as a pivotal framework for the development of effective global models.
A key challenge in FL is client drift, where data heterogeneity impedes the aggregation of scattered knowledge.
We introduce a novel algorithm named FedDr+, which empowers local model alignment using dot-regression loss.
arXiv Detail & Related papers (2024-06-04T14:34:13Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneities.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among different non-IID types, label skews have been challenging and common in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- Contrastive encoder pre-training-based clustered federated learning for heterogeneous data [17.580390632874046]
Federated learning (FL) enables distributed clients to collaboratively train a global model while preserving their data privacy.
We propose contrastive pre-training-based clustered federated learning (CP-CFL) to improve the model convergence and overall performance of FL systems.
arXiv Detail & Related papers (2023-11-28T05:44:26Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-IID users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.