Ensemble Federated Adversarial Training with Non-IID data
- URL: http://arxiv.org/abs/2110.14814v1
- Date: Tue, 26 Oct 2021 03:55:20 GMT
- Title: Ensemble Federated Adversarial Training with Non-IID data
- Authors: Shuang Luo and Didi Zhu and Zexi Li and Chao Wu
- Abstract summary: Adversarial samples can confuse and mislead client models for malicious purposes.
We introduce a novel Ensemble Federated Adversarial Training method, termed EFAT.
Our proposed method achieves promising results compared with directly combining federated learning with adversarial training approaches.
- Score: 1.5878082907673585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although federated learning endows distributed clients with a cooperative
training mode under the premise of protecting data privacy and security, the
clients remain vulnerable to adversarial samples due to a lack of robustness.
Adversarial samples can confuse and mislead client models for malicious
purposes by injecting carefully crafted noise into normal inputs. In this
paper, we introduce a novel Ensemble Federated Adversarial Training method,
termed EFAT, that enables an efficacious and robust coupled training
mechanism. Our core idea is to enhance the diversity of adversarial examples
by expanding the training data with different perturbations generated from
other participating clients, which helps adversarial training perform well in
Non-IID settings. Experimental results on different Non-IID situations,
including feature distribution skew and label distribution skew, show that our
proposed method achieves promising results compared with directly combining
federated learning with adversarial training approaches.
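To make the core idea concrete, here is a minimal sketch of the perturbation-exchange step in PyTorch. It is an illustration under stated assumptions, not the paper's exact algorithm: it assumes image inputs scaled to [0, 1], substitutes a single FGSM step for whatever attack EFAT actually uses, and the names fgsm_perturb, local_adversarial_step, and peer_models are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Craft a one-step FGSM adversarial example of x against `model`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    # One signed-gradient step, clipped back to the valid image range.
    return (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

def local_adversarial_step(local_model, peer_models, optimizer, x, y):
    """One local training step on adversarial examples whose perturbations
    come both from the local model and from other clients' models; the
    peer-generated perturbations supply the extra diversity that helps
    under Non-IID data."""
    batches = [fgsm_perturb(local_model, x, y)]
    batches += [fgsm_perturb(m, x, y) for m in peer_models]
    x_aug = torch.cat(batches)            # expanded training data
    y_aug = y.repeat(len(batches))        # matching labels
    optimizer.zero_grad()
    loss = F.cross_entropy(local_model(x_aug), y_aug)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design point is simply that perturbations crafted against peers' models broaden the adversarial distribution each client trains on; the real method would also have to decide which models are exchanged and how the augmented batches are weighted.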
Related papers
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed learning framework based on collaborative model training across distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks (a minimal sketch of this AT-in-FL pattern appears after this list).
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Reinforcement Learning as a Catalyst for Robust and Fair Federated Learning: Deciphering the Dynamics of Client Contributions [6.318638597489423]
Reinforcement Federated Learning (RFL) is a novel framework that leverages deep reinforcement learning to adaptively optimize client contributions during aggregation.
In terms of robustness, RFL outperforms state-of-the-art methods, while maintaining comparable levels of fairness.
arXiv Detail & Related papers (2024-02-08T10:22:12Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to outright reconstruct participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Characterizing Internal Evasion Attacks in Federated Learning [12.873984200814533]
Federated learning allows clients to jointly train a machine learning model.
Clients' models are vulnerable to attacks during the training and testing phases.
In this paper, we address the issue of adversarial clients performing "internal evasion attacks".
arXiv Detail & Related papers (2022-09-17T21:46:38Z)
- Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z)
- Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing [16.528628447356496]
In this paper, we incorporate smoothing techniques into federated adversarial training to enable data-private distributed learning.
Our experiments show that such an advanced federated adversarial learning framework can deliver models as robust as those produced by centralized training.
arXiv Detail & Related papers (2021-03-30T02:19:45Z)
- Personalized Cross-Silo Federated Learning on Non-IID Data [62.68467223450439]
Non-IID data present a tough challenge for federated learning.
We propose a novel idea of pairwise collaborations between clients with similar data.
arXiv Detail & Related papers (2020-07-07T21:38:36Z)
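Several entries above (the logits-calibration work, SFAT, and the randomized-smoothing work) build on the AT-in-FL pattern referenced in the first entry: each client runs adversarial training locally and the server aggregates the resulting weights. The sketch below shows only that baseline pattern, assuming image inputs in [0, 1], PGD as the local attack, and uniform FedAvg aggregation; pgd_attack and federated_adversarial_round are illustrative names, not APIs from any cited paper.

```python
import copy
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step PGD attack projected onto an L-infinity ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project onto ball
        x_adv = x_adv.clamp(0, 1)                         # keep pixels valid
    return x_adv.detach()

def federated_adversarial_round(global_model, client_loaders, lr=0.01):
    """One FedAvg round in which every client trains on PGD examples."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:
            x_adv = pgd_attack(local, x, y)
            opt.zero_grad()
            F.cross_entropy(local(x_adv), y).backward()
            opt.step()
        client_states.append(local.state_dict())
    # Uniform FedAvg: average every weight (and buffer) across clients.
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

As the EFAT abstract and the SFAT entry both note, this naive combination degrades under Non-IID data; the cited papers differ mainly in how they modify the local attack or the aggregation step to counter that.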