Dynamic Defense Against Byzantine Poisoning Attacks in Federated
Learning
- URL: http://arxiv.org/abs/2007.15030v2
- Date: Thu, 24 Feb 2022 16:01:44 GMT
- Title: Dynamic Defense Against Byzantine Poisoning Attacks in Federated
Learning
- Authors: Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. Victoria
Luzón, Francisco Herrera
- Abstract summary: Federated learning is vulnerable to Byzantine poisoning adversarial attacks.
We propose a dynamic aggregation operator that discards adversarial clients during training.
The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model.
- Score: 11.117880929232575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning, as a distributed learning paradigm that conducts
training on local devices without accessing the training data, is vulnerable to
Byzantine poisoning adversarial attacks. We argue that the federated learning
model has to withstand this kind of adversarial attack by filtering out the
adversarial clients by means of the federated aggregation operator. We propose
a dynamic federated aggregation operator that dynamically discards those
adversarial clients and prevents the corruption of the global learning model.
We assess it as a defense against adversarial attacks by deploying a deep
learning classification model in a federated learning setting on the Fed-EMNIST
Digits, Fashion MNIST and CIFAR-10 image datasets. The results show that the
dynamic selection of the clients to aggregate enhances the performance of the
global learning model and discards the adversarial clients as well as the poor
ones (those with low-quality models).
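
The abstract describes the dynamic aggregation operator only at a high level. As a rough illustration of the general idea, the following Python sketch assumes the server can score each client update on a small held-out validation set and drops the clients whose score falls far below the best one before a weighted, FedAvg-style average; the function name, the scoring scheme and the drop_margin threshold are illustrative assumptions, not the paper's exact operator.

import numpy as np

def aggregate_dynamic(client_updates, client_scores, drop_margin=0.2):
    # client_updates: list of dicts {param_name: np.ndarray}, one per client.
    # client_scores:  list of floats, e.g. accuracy of each local model on a
    #                 small server-side validation set (an assumption here).
    best = max(client_scores)
    # Keep only clients whose score is within drop_margin of the best one;
    # the top-scoring client is therefore always retained.
    kept = [i for i, s in enumerate(client_scores) if s >= best - drop_margin]

    # Weight the surviving clients proportionally to their scores.
    weights = np.array([client_scores[i] for i in kept], dtype=float)
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)      # fall back to a plain average
    weights /= weights.sum()

    aggregated = {}
    for name in client_updates[0]:
        aggregated[name] = sum(w * client_updates[i][name]
                               for w, i in zip(weights, kept))
    return aggregated, kept                  # kept: indices of accepted clients

# Hypothetical example: the third client's update looks poisoned and is dropped.
updates = [{"w": np.array([1.0, 2.0])},
           {"w": np.array([1.2, 1.8])},
           {"w": np.array([9.0, -7.0])}]
scores = [0.91, 0.89, 0.12]
agg, kept_clients = aggregate_dynamic(updates, scores)   # kept_clients == [0, 1]

A more elaborate filter (for instance, one that orders clients and adapts the cut-off each round) fits the same interface: only the rule that produces kept and weights changes.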
Related papers
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to the gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a shared global model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks; a minimal majority-vote sketch of the ensemble idea appears after this list.
Our experiments show that the label predicted by FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Characterizing Internal Evasion Attacks in Federated Learning [12.873984200814533]
Federated learning allows clients to jointly train a machine learning model.
Clients' models are vulnerable to attacks during the training and testing phases.
In this paper, we address the issue of adversarial clients performing "internal evasion attacks".
arXiv Detail & Related papers (2022-09-17T21:46:38Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Certified Federated Adversarial Training [3.474871319204387]
We tackle the scenario of securing FL systems conducting adversarial training when a quorum of workers could be completely malicious.
We model an attacker who poisons the model to insert a weakness into the adversarial training such that the model displays apparent adversarial robustness.
We show that this defence can preserve adversarial robustness even against an adaptive attacker.
arXiv Detail & Related papers (2021-12-20T13:40:20Z)
- RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
arXiv Detail & Related papers (2021-07-18T09:34:57Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
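
The FLCert entry above only states the guarantee; the sketch below illustrates the ensemble idea of voting over several independently trained global models (assumed here to be trained on disjoint groups of clients), so that a bounded number of malicious clients can corrupt only a few of the voters. The names global_models and ensemble_predict are hypothetical, and the certification analysis itself is not reproduced.

from collections import Counter

def ensemble_predict(global_models, x):
    # global_models: list of callables, each mapping an input to a class label;
    #                each is assumed to be trained on a disjoint client group.
    # x:             a single test input.
    votes = Counter(model(x) for model in global_models)
    # Plurality vote: corrupting a few of the voters does not change the label
    # as long as the winning margin exceeds the number of corrupted models.
    (label, count), = votes.most_common(1)
    return label, count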
This list is automatically generated from the titles and abstracts of the papers in this site.