Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data
- URL: http://arxiv.org/abs/2109.02396v1
- Date: Mon, 6 Sep 2021 12:18:02 GMT
- Title: Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data
- Authors: Kun Zhai and Qiang Ren and Junli Wang and Chungang Yan
- Abstract summary: Federated learning is a novel framework that enables resource-constrained edge devices to jointly learn a model.
Standard federated learning is vulnerable to Byzantine attacks.
We propose a Byzantine-robust framework for federated learning via credibility assessment on non-iid data.
- Score: 1.4146420810689422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a novel framework that enables resource-constrained
edge devices to jointly learn a model, addressing the problems of data
protection and data islands. However, standard federated learning is vulnerable
to Byzantine attacks, which can cause the global model to be manipulated by
an attacker or to fail to converge. On non-iid data, current methods are not
effective in defending against Byzantine attacks. In this paper, we propose a
Byzantine-robust framework for federated learning via credibility assessment on
non-iid data (BRCA). Credibility assessment is designed to detect Byzantine
attacks by combining an adaptive anomaly detection model with data verification.
Specifically, an adaptive mechanism is incorporated into the anomaly detection
model for its training and prediction. Simultaneously, a unified
update algorithm is given to guarantee that the global model has a consistent
direction. On non-iid data, our experiments demonstrate that BRCA is more
robust to Byzantine attacks than conventional methods.
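The abstract gives no reference implementation, so the following is a minimal sketch of credibility-weighted aggregation in the spirit of BRCA; the `anomaly_scores` input and the threshold `tau` are hypothetical stand-ins for the outputs of the paper's adaptive anomaly detection model.

```python
import numpy as np

def credibility_weighted_aggregate(client_updates, anomaly_scores, tau=0.5):
    """Aggregate client updates weighted by credibility.

    `anomaly_scores` (higher = more suspicious) is assumed to come from
    the adaptive anomaly detection model; clients scoring above `tau` are
    discarded, and the rest are weighted by 1 - score.
    """
    scores = np.asarray(anomaly_scores)
    credibility = np.where(scores > tau, 0.0, 1.0 - scores)
    if credibility.sum() == 0:
        raise ValueError("every client was flagged as Byzantine")
    weights = credibility / credibility.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# toy usage: three honest clients and one sending a poisoned update
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.1]), np.array([10.0, -10.0])]
print(credibility_weighted_aggregate(updates, [0.10, 0.15, 0.12, 0.95]))
```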
Related papers
- Towards Trustworthy Web Attack Detection: An Uncertainty-Aware Ensemble Deep Kernel Learning Model [4.791983040541727]
We propose an Uncertainty-aware Ensemble Deep Kernel Learning (UEDKL) model to detect web attacks.
The proposed UEDKL utilizes a deep kernel learning model to distinguish normal HTTP requests from different types of web attacks.
Experiments on BDCI and SRBH datasets demonstrated that the proposed UEDKL framework yields significant improvement in both web attack detection performance and uncertainty estimation quality.
arXiv Detail & Related papers (2024-10-10T08:47:55Z)
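For the UEDKL entry above: the paper's deep kernel learners are not reproduced here, but a minimal sketch of ensemble prediction with a predictive-entropy uncertainty estimate (one common proxy; the paper's exact estimator may differ) looks like this.

```python
import numpy as np

def ensemble_predict(prob_outputs):
    """Combine per-member class probabilities and estimate uncertainty.

    prob_outputs: list of (n_samples, n_classes) arrays, one per ensemble
    member (stand-ins for UEDKL's deep kernel learners). Uncertainty is
    the predictive entropy of the averaged distribution.
    """
    probs = np.mean(prob_outputs, axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return probs.argmax(axis=1), entropy

# toy usage: two members, two HTTP requests, classes = {normal, attack}
member_a = np.array([[0.9, 0.1], [0.5, 0.5]])
member_b = np.array([[0.8, 0.2], [0.4, 0.6]])
labels, uncertainty = ensemble_predict([member_a, member_b])
print(labels, uncertainty)  # the second request is far more uncertain
```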
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
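For the user-elimination entry above, a minimal sketch that substitutes a simple median-absolute-deviation test on update norms for the paper's metadata-plus-Differential-Privacy machinery (our simplification):

```python
import numpy as np

def eliminate_and_aggregate(client_updates, mad_thresh=3.5):
    """Drop clients whose update norm is an outlier, then average the rest.

    The MAD test on norms is our stand-in for the paper's anomaly
    detection criterion, not the authors' exact rule.
    """
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12
    keep = np.abs(norms - med) / mad < mad_thresh
    if not keep.any():
        raise ValueError("all clients eliminated; raise mad_thresh")
    return np.stack(client_updates)[keep].mean(axis=0)
```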
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
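For the logits-calibration entry above: the adversarial training (AT) step a client might run locally can be sketched with FGSM on a binary logistic model; the paper's logit calibration on non-IID data is not reproduced here.

```python
import numpy as np

def fgsm_logistic(w, b, x, y, eps=0.1):
    """FGSM adversarial example for binary logistic regression.

    For loss = cross-entropy and z = w.x + b, dL/dx = (p - y) * w,
    so the attack steps along sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - y) * w)

def local_adversarial_step(w, b, x, y, lr=0.1, eps=0.1):
    """One client-side AT step: train on the perturbed example."""
    x_adv = fgsm_logistic(w, b, x, y, eps)
    p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    return w - lr * (p - y) * x_adv, b - lr * (p - y)
```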
- SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [29.491675102478798]
We introduce SureFED, a novel framework for robust federated learning.
SureFED establishes trust using the local information of benign clients.
We theoretically prove the robustness of our algorithm against data and model poisoning attacks.
arXiv Detail & Related papers (2023-08-04T23:51:05Z)
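For the SureFED entry above, a minimal sketch of a client-side inspection step, substituting a plain validation-loss comparison (with a hypothetical `slack` tolerance) for the paper's uncertainty-aware inward/outward inspection:

```python
import numpy as np

def inward_inspection(local_loss, candidate_losses, slack=0.1):
    """Flag candidate models that look poisoned from a client's viewpoint.

    SureFED benchmarks incoming models against a clean, locally trained
    reference; this sketch replaces its uncertainty-aware comparison with
    a plain loss test (our simplification). Returns a trust mask.
    """
    losses = np.asarray(candidate_losses)
    return losses <= local_loss * (1.0 + slack)

# toy usage: the client's own model reaches loss 0.5; the third
# candidate's loss is suspiciously high and gets rejected
print(inward_inspection(0.5, [0.48, 0.55, 2.3]))  # [ True  True False]
```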
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
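For the multi-metric entry above, a minimal sketch of filtering updates that are outliers under any of several metrics, so an adaptive adversary would have to satisfy all of them at once; the two metrics used here are illustrative choices, not MESAS's actual set.

```python
import numpy as np

def multi_metric_filter(updates, mad_thresh=3.5):
    """Keep only updates that pass an outlier test on every metric.

    Metrics here: L2 norm, and cosine similarity to the coordinate-wise
    median of all updates. Both are common choices, assumed for this
    sketch rather than taken from the paper.
    """
    U = np.stack(updates)
    ref = np.median(U, axis=0)
    metrics = {
        "norm": np.linalg.norm(U, axis=1),
        "cos": U @ ref / (np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12),
    }
    keep = np.ones(len(updates), dtype=bool)
    for values in metrics.values():
        med = np.median(values)
        mad = np.median(np.abs(values - med)) + 1e-12
        keep &= np.abs(values - med) / mad < mad_thresh
    return U[keep].mean(axis=0)
```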
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
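For the nuisance-extended information bottleneck entry above, a minimal sketch of hybrid discriminative-generative training: a tiny linear autoencoder and classifier share an encoder and are trained on a combined cross-entropy-plus-reconstruction objective. Folding the paper's mutual-information constraint into a plain reconstruction term is our simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 3, 64
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)          # a simple synthetic label

E = rng.normal(scale=0.1, size=(d, k))   # shared encoder
D = rng.normal(scale=0.1, size=(k, d))   # decoder (generative head)
w = np.zeros(k)                          # classifier (discriminative head)
lr, lam = 0.05, 0.5                      # lam trades off the two losses

for _ in range(200):
    Z = X @ E                            # latent codes
    p = 1 / (1 + np.exp(-(Z @ w)))       # classifier probabilities
    R = Z @ D                            # reconstructions
    # gradient steps on CE(y, p)/n + lam * MSE(X, R)
    w -= lr * Z.T @ (p - y) / n
    D -= lr * 2 * lam * Z.T @ (R - X) / n
    gz = np.outer(p - y, w) / n + 2 * lam * (R - X) @ D.T / n
    E -= lr * X.T @ gz

p = 1 / (1 + np.exp(-((X @ E) @ w)))
print("train accuracy:", ((p > 0.5) == y).mean())
```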
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
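For the SFAT entry above, a heavily hedged sketch of loss-dependent aggregation weighting; the specific rule (upweight clients whose adversarial loss is at or below the median) is our reading of an alpha-slack mechanism, not the paper's verbatim algorithm.

```python
import numpy as np

def slack_weighted_aggregate(updates, adv_losses, alpha=0.2):
    """Aggregate updates with slack-adjusted weights.

    Clients at or below the median adversarial loss get weight (1+alpha),
    the rest (1-alpha); this direction of reweighting is an assumption.
    """
    losses = np.array(adv_losses)
    w = np.where(losses <= np.median(losses), 1 + alpha, 1 - alpha)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```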
- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning [4.627944480085717]
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients.
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
arXiv Detail & Related papers (2023-02-14T16:36:38Z)
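For the experimental-study entry above, two classic Byzantine-robust aggregation rules of the kind such studies benchmark: coordinate-wise median and coordinate-wise trimmed mean.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median, a classic Byzantine-robust rule."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, beta=0.2):
    """Coordinate-wise trimmed mean: drop the beta fraction of extreme
    values at each end of every coordinate before averaging."""
    U = np.sort(np.stack(updates), axis=0)
    k = int(beta * len(updates))
    return U[k:len(updates) - k].mean(axis=0)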
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of these two modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
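For the substitute-training entry above, a minimal sketch of the generic pipeline: query a black-box `oracle` for labels, fit a simple substitute, then craft an FGSM example on the substitute and hope it transfers. The paper's contribution, designing the query distribution, is not reproduced; we simply query on given data.

```python
import numpy as np

def train_substitute(oracle, X, steps=500, lr=0.1):
    """Fit a logistic-regression substitute to a black-box oracle's labels.

    `oracle` maps a sample to a 0/1 label (the target model's output).
    """
    y = np.array([oracle(x) for x in X], dtype=float)  # stolen labels
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(X)
        b -= lr * np.mean(p - y)
    return w, b

def transfer_attack(w, b, x, y, eps=0.3):
    """Craft an FGSM example on the substitute, hoping it transfers."""
    p = 1 / (1 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - y) * w)
```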
- FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning [8.165895353387853]
Federated learning allows multiple clients to collaboratively train statistical models without disclosing private data.
However, Byzantine workers may launch data poisoning and model poisoning attacks.
Most of the existing Byzantine-robust FL schemes are either ineffective against several advanced poisoning attacks or need to centralize a public validation dataset.
We propose FedCom, a novel Byzantine-robust federated learning framework by incorporating the idea of commitment from cryptography.
arXiv Detail & Related papers (2021-04-16T10:29:26Z)
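For the FedCom entry above, a minimal hash-based commit/verify pair that conveys the commitment idea; FedCom commits to data characteristics with its own construction, so a salted SHA-256 over update bytes is only a stand-in with the same binding/hiding flavor.

```python
import hashlib
import secrets
import numpy as np

def commit(update: np.ndarray) -> tuple[bytes, bytes]:
    """Produce a salted hash commitment to a model update."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + update.tobytes()).digest()
    return digest, nonce

def verify(update: np.ndarray, digest: bytes, nonce: bytes) -> bool:
    """Server-side check that the opened update matches the commitment."""
    return hashlib.sha256(nonce + update.tobytes()).digest() == digest
```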
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
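For the adversarial self-supervised contrastive learning entry above, a minimal sketch of an instance-confusion perturbation: push a sample's embedding away from its clean location with PGD-style steps. A linear encoder `W` stands in for the paper's network, and plain embedding distance for the full contrastive loss.

```python
import numpy as np

def instance_confusion_attack(W, x, steps=10, eps=0.05, step=0.01, seed=0):
    """Perturb x so its embedding W @ x drifts from the clean embedding,
    confusing the sample's instance-level identity."""
    rng = np.random.default_rng(seed)
    anchor = W @ x
    delta = rng.uniform(-eps, eps, size=x.shape)  # random start, as in PGD
    for _ in range(steps):
        # gradient of ||W(x + delta) - anchor||^2 w.r.t. delta
        grad = 2 * W.T @ (W @ (x + delta) - anchor)
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return x + delta
```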