Mitigating Data Injection Attacks on Federated Learning
- URL: http://arxiv.org/abs/2312.02102v3
- Date: Sun, 14 Jan 2024 21:05:13 GMT
- Title: Mitigating Data Injection Attacks on Federated Learning
- Authors: Or Shalom, Amir Leshem, Waheed U. Bajwa
- Abstract summary: Federated learning is a technique that allows multiple entities to collaboratively train models using their data.
Despite its advantages, federated learning can be susceptible to false data injection attacks.
We propose a novel technique to detect and mitigate data injection attacks on federated learning systems.
- Score: 20.24380409762923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a technique that allows multiple entities to
collaboratively train models using their data without compromising data
privacy. However, despite its advantages, federated learning can be susceptible
to false data injection attacks. In these scenarios, a malicious entity with
control over specific agents in the network can manipulate the learning
process, leading to a suboptimal model. Consequently, addressing these data
injection attacks presents a significant research challenge in federated
learning systems. In this paper, we propose a novel technique to detect and
mitigate data injection attacks on federated learning systems. Our mitigation
method is a local scheme, performed by the coordinating node during a single
training instance, allowing mitigation while the algorithm converges. Whenever
an agent is suspected of being an attacker, its data is ignored for a certain
period, and this decision is periodically re-evaluated. We prove that, with
probability 1, after a finite time all attackers will be ignored, while the
probability of ignoring a truthful agent becomes 0, provided
that there is a majority of truthful agents. Simulations show that when the
coordinating node detects and isolates all the attackers, the model recovers
and converges to the truthful model.
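The abstract does not spell out the detection statistic, so the following Python sketch only illustrates the overall control flow of such a scheme at the coordinating node: updates from agents flagged as suspicious are excluded for a fixed number of rounds, and the decision is re-evaluated once that period expires. The distance-to-median suspicion rule and the `cooldown` and `tol` parameters are illustrative assumptions, not the paper's method.

```python
# A minimal sketch of the suspend-and-re-evaluate control flow described in
# the abstract. The actual detection statistic is not given above; as a
# stand-in, agents whose updates lie far from the coordinate-wise median are
# flagged, which is purely an illustrative assumption.

import numpy as np

def aggregate_round(updates, suspended, t, cooldown=10, tol=3.0):
    """One aggregation round at the coordinating node.

    updates   : dict agent_id -> flattened update vector (np.ndarray)
    suspended : dict agent_id -> round index until which the agent is ignored
    t         : current round index
    """
    # Agents whose suspension period has expired are re-evaluated this round.
    active = {a: u for a, u in updates.items() if suspended.get(a, -1) <= t}

    stacked = np.stack(list(active.values()))
    center = np.median(stacked, axis=0)                   # robust reference
    spread = np.median(np.abs(stacked - center)) + 1e-12  # robust scale

    for agent, u in active.items():
        # Hypothetical suspicion rule: update far from the robust center.
        if np.linalg.norm(u - center) > tol * spread * np.sqrt(u.size):
            suspended[agent] = t + cooldown  # ignore for a fixed period

    trusted = [u for a, u in active.items() if suspended.get(a, -1) <= t]
    if not trusted:  # safety fallback if every agent was flagged
        return center, suspended
    return np.mean(trusted, axis=0), suspended
```

The convergence guarantee stated in the abstract (all attackers eventually ignored, no truthful agent permanently ignored) depends on the paper's actual detection test, not on this placeholder rule.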
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a simplified frequency-domain filtering sketch appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Towards Attack-tolerant Federated Learning via Critical Parameter
Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated Learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not (a simplified overlap-scoring sketch appears after this list).
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning
Few-Shot Meta-Learners [28.468089304148453]
We attack amortized meta-learners, which allows us to craft colluding sets of inputs that fool the system's learning algorithm.
We show that in a white box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance.
We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred.
arXiv Detail & Related papers (2022-11-23T14:55:44Z) - DAD: Data-free Adversarial Defense at Test Time [21.741026088202126]
Deep models are highly susceptible to adversarial attacks.
Privacy has become an important concern, restricting access to trained models only and not to the training data.
We propose a completely novel problem of 'test-time adversarial defense in the absence of training data and even their statistics'.
arXiv Detail & Related papers (2022-04-04T15:16:13Z) - Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z) - Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z) - Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that a student model trained from scratch on its teachers' outputs reaches F1-scores comparable to those of the teacher.
arXiv Detail & Related papers (2021-02-01T14:38:54Z) - Untargeted Poisoning Attack Detection in Federated Learning via Behavior
Attestation [7.979659145328856]
Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information.
Despite its advantages, there is an increased potential for cyberattacks on FL-based ML techniques that can undermine the benefits.
We propose attestedFL, a defense mechanism that monitors the training of individual nodes through state persistence in order to detect a malicious worker.
arXiv Detail & Related papers (2021-01-24T20:52:55Z) - Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have vulnerabilities related to security concerns.
Data-poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z) - Data Poisoning Attacks on Federated Machine Learning [34.48190607495785]
Federated machine learning enables resource-constrained node devices to learn a shared model while keeping the training data local.
The communication protocol amongst different nodes could be exploited by attackers to launch data poisoning attacks.
We propose a novel systems-aware optimization method, ATTack on Federated Learning (AT2FL).
arXiv Detail & Related papers (2020-04-19T03:45:05Z)