Delving into the Adversarial Robustness of Federated Learning
- URL: http://arxiv.org/abs/2302.09479v1
- Date: Sun, 19 Feb 2023 04:54:25 GMT
- Title: Delving into the Adversarial Robustness of Federated Learning
- Authors: Jie Zhang, Bo Li, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding,
Chao Wu
- Abstract summary: In Federated Learning (FL), models are as vulnerable to adversarial examples as centrally trained models.
We propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT) to improve both accuracy and robustness of FL systems.
- Score: 41.409961662754405
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In Federated Learning (FL), models are as vulnerable to adversarial examples
as centrally trained models. However, the adversarial robustness of federated
learning remains largely unexplored. This paper sheds light on the challenge of
adversarial robustness of federated learning. To facilitate a better
understanding of the adversarial vulnerability of the existing FL methods, we
conduct comprehensive robustness evaluations on various attacks and adversarial
training methods. Moreover, we reveal the negative impact of directly adopting
adversarial training in FL, which seriously hurts test accuracy, especially in
non-IID settings. In this work, we propose a novel algorithm
called Decision Boundary based Federated Adversarial Training (DBFAT), which
consists of two components (local re-weighting and global regularization) to
improve both accuracy and robustness of FL systems. Extensive experiments on
multiple datasets demonstrate that DBFAT consistently outperforms other
baselines under both IID and non-IID settings.
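For intuition, the following is a minimal, hypothetical PyTorch sketch of federated adversarial training with a confidence-margin-based local re-weighting and a simple proximal term toward the global model standing in for global regularization. The PGD attack, the margin heuristic, the proximal penalty, and all function names are illustrative assumptions, not the paper's exact DBFAT formulation.

```python
# Minimal, hypothetical sketch of federated adversarial training with a
# decision-boundary-style local re-weighting and a proximal "global regularization"
# term. This is NOT the paper's exact DBFAT objective; the margin heuristic,
# the proximal penalty, and all names are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD attack (generic, not DBFAT-specific); inputs in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def local_adversarial_update(global_model, loader, epochs=1, lr=0.01, mu=0.1):
    """One client's round: adversarial training where examples close to the
    decision boundary (low clean-confidence margin) get larger weights, plus an
    L2 proximal term toward the global weights as a simple global regularizer."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            with torch.no_grad():
                # crude "distance to the decision boundary": clean confidence on the true class
                margin = F.softmax(model(x), dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
                weights = (1.0 - margin) / (1.0 - margin).mean().clamp_min(1e-8)
            per_example = F.cross_entropy(model(x_adv), y, reduction="none")
            prox = sum(((p - g) ** 2).sum() for p, g in zip(model.parameters(), global_params))
            loss = (weights * per_example).mean() + 0.5 * mu * prox
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()


def fedavg(client_states):
    """Plain FedAvg aggregation of the clients' state dicts (buffers averaged too, for simplicity)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states], dim=0).mean(dim=0)
    return avg
```

A server loop would broadcast the current global weights each round, call local_adversarial_update on the sampled clients, and aggregate the returned state dicts with fedavg.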
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient-based technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce a Trustworthy Personalized Federated Learning (TPFL) framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by large margins on standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- Understanding Adversarial Transferability in Federated Learning [16.204192821886927]
We investigate robustness and security issues in a novel and practical setting.
A group of malicious clients can influence the model during training by disguising their identities and acting as benign clients.
Our aim is to offer a full understanding of the challenges the FL system faces in this practical setting.
arXiv Detail & Related papers (2023-10-01T08:35:46Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust, instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing a KL-divergence-regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing [16.528628447356496]
In this paper, we incorporate randomized smoothing techniques into federated adversarial training to enable data-private distributed learning.
Our experiments show that such an advanced federated adversarial learning framework can deliver models as robust as those trained by centralized training; a generic smoothing sketch appears after this list.
arXiv Detail & Related papers (2021-03-30T02:19:45Z)
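The last related entry above mentions incorporating randomized smoothing into federated adversarial training. As a reminder of what smoothing-based prediction looks like, here is a minimal, generic sketch in the style of standard randomized smoothing (a majority vote over Gaussian-noised copies of the input); it is an illustrative assumption, not the cited paper's federated procedure, and all names and parameters are hypothetical.

```python
# Generic randomized-smoothing prediction: classify many Gaussian-noised copies of
# the input and take a majority vote. Illustrative only; not the cited paper's
# federated training procedure, and the parameter names are hypothetical.
import torch


@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=200, batch=50):
    """x: a single input tensor of shape (C, H, W); model returns logits over classes.
    A full certification would additionally derive an L2 radius from the vote counts
    (as in standard randomized smoothing), which is omitted here."""
    counts = None
    remaining = n_samples
    while remaining > 0:
        m = min(batch, remaining)
        remaining -= m
        # draw m noisy copies of x and classify them
        noisy = x.unsqueeze(0).repeat(m, 1, 1, 1) + sigma * torch.randn(m, *x.shape)
        logits = model(noisy)
        votes = torch.bincount(logits.argmax(dim=1), minlength=logits.shape[1])
        counts = votes if counts is None else counts + votes
    return int(counts.argmax())
```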
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.