Understanding Adversarial Transferability in Federated Learning
- URL: http://arxiv.org/abs/2310.00616v1
- Date: Sun, 1 Oct 2023 08:35:46 GMT
- Title: Understanding Adversarial Transferability in Federated Learning
- Authors: Yijiang Li, Ying Gao and Haohan Wang
- Abstract summary: We investigate robustness and security issues of federated learning (FL) in a novel and practical setting.
A group of malicious clients influences the model during training by disguising their identities and acting as benign clients.
Our aim is to offer a full understanding of the challenges the FL system faces in this practical setting.
- Score: 16.204192821886927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate robustness and security issues in a novel and practical
setting: a group of malicious clients influences the model during training by
disguising their identities and acting as benign clients, revealing their
adversarial role only after training in order to conduct transferable adversarial
attacks with their data, which is usually a subset of the data the FL system is
trained on. Our aim is to offer a full understanding of the challenges the
FL system faces in this practical setting across a spectrum of configurations.
We find that such an attack is possible, but that the federated model is more
robust than its centralized counterpart when the accuracy on clean images is
comparable. Through our study, we hypothesize that this robustness stems from
two factors: decentralized training on distributed data and the averaging
operation. We provide evidence from both empirical experiments and theoretical
analysis. Our work has implications for understanding the robustness of
federated learning systems and poses a practical question for federated
learning applications.
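To make the setting concrete, here is a minimal sketch (not the authors' code) of the two ingredients the abstract highlights: the FedAvg-style averaging operation that produces the global federated model, and a transferable adversarial attack that a formerly disguised client crafts on its own local model and data subset. The tiny linear architecture, the FGSM attack, the client sizes, and the random inputs are illustrative assumptions; local training loops are omitted.

```python
# A minimal sketch, not the paper's implementation: it illustrates (1) the
# FedAvg-style averaging operation that forms the global federated model and
# (2) a transferable adversarial attack crafted by a malicious client on its
# own model and data subset. Architecture, FGSM, client sizes, and random
# inputs are illustrative assumptions; local training loops are omitted.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts (the 'averaging operation')."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg

def fgsm(surrogate, x, y, eps=8 / 255):
    """Craft adversarial examples on the attacker's local (surrogate) model."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(surrogate(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy setup: three clients share one architecture; the server averages them.
make_model = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
clients, sizes = [make_model() for _ in range(3)], [500, 300, 200]
global_model = make_model()
global_model.load_state_dict(fedavg([c.state_dict() for c in clients], sizes))

# The malicious client (index 0) crafts perturbations on its own data subset,
# then the same perturbed inputs are evaluated against the averaged global model.
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = fgsm(clients[0], x, y)
transfer_acc = (global_model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"global-model accuracy on transferred adversarial examples: {transfer_acc:.2f}")
```

In the paper's setting, the surrogate would be the local model the malicious client holds after federated training, and the comparison of interest is against a centrally trained model of comparable clean accuracy.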
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z) - Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data [38.44734564565478]
We provide a theoretical understanding of adversarial examples and adversarial training algorithms from the perspective of feature learning theory.
We show that the adversarial training method can provably strengthen the robust feature learning and suppress the non-robust feature learning.
arXiv Detail & Related papers (2024-10-11T03:59:49Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Delving into the Adversarial Robustness of Federated Learning [41.409961662754405]
In Federated Learning (FL), models are as fragile against adversarial examples as centrally trained models.
We propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT) to improve both accuracy and robustness of FL systems.
arXiv Detail & Related papers (2023-02-19T04:54:25Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated
Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve robustness improvement with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - Robust Transferable Feature Extractors: Learning to Defend Pre-Trained
Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - FR-Train: A Mutual Information-Based Approach to Fair and Robust
Training [33.385118640843416]
We propose FR-Train, which holistically performs fair and robust model training.
In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning.
arXiv Detail & Related papers (2020-02-24T13:37:29Z)