A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning
- URL: http://arxiv.org/abs/2305.02022v2
- Date: Wed, 14 Aug 2024 13:37:29 GMT
- Title: A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning
- Authors: Kiran Purohit, Soumi Das, Sourangshu Bhattacharya, Santu Rana
- Abstract summary: We propose an effective defense against model poisoning attacks in Federated Learning systems.
DataDefense learns a poisoned data detector model which marks each example in the defense dataset as poisoned or clean.
It is able to reduce the attack success rate by at least 40% on standard attack setups and by more than 80% on some setups.
- Score: 13.89043799280729
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning systems are increasingly subjected to a multitude of model poisoning attacks from clients. Among these, edge-case attacks that target a small fraction of the input space are nearly impossible to detect using existing defenses, leading to a high attack success rate. We propose an effective defense using an external defense dataset, which provides information about the attack target. The defense dataset contains a mix of poisoned and clean examples, with only a few known to be clean. The proposed method, DataDefense, uses this dataset to learn a poisoned data detector model which marks each example in the defense dataset as poisoned or clean. It also learns a client importance model that estimates the probability of a client update being malicious. The global model is then updated as a weighted average of the client models' updates. The poisoned data detector and the client importance model parameters are updated using an alternating minimization strategy over the Federated Learning rounds. Extensive experiments on standard attack scenarios demonstrate that DataDefense can defend against model poisoning attacks where other state-of-the-art defenses fail. In particular, DataDefense is able to reduce the attack success rate by at least ~ 40% on standard attack setups and by more than 80% on some setups. Furthermore, DataDefense requires very few defense examples (as few as five) to achieve a near-optimal reduction in attack success rate.
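To make the aggregation step concrete, here is a minimal, illustrative Python sketch. The learned poisoned-data detector is stood in by a fixed clean mask, and the importance rule (a softmax over each client's loss on the likely-clean defense examples) is an assumption for illustration, not the paper's exact formulation; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    """Squared-error loss of a linear model w on defense data (X, y)."""
    return float(np.mean((X @ w - y) ** 2))

def fl_round(client_ws, X_def, y_def, clean_mask):
    """One DataDefense-style round: weight each client update by how
    well it fits the part of the defense dataset marked clean."""
    Xc, yc = X_def[clean_mask], y_def[clean_mask]
    losses = np.array([loss(w, Xc, yc) for w in client_ws])
    importance = np.exp(-losses)          # lower defense loss -> higher weight
    importance /= importance.sum()
    # Global model = importance-weighted average of client models.
    return sum(p * w for p, w in zip(importance, client_ws))

# Toy usage: three benign clients and one malicious one.
d = 5
w_true = rng.normal(size=d)
X_def = rng.normal(size=(20, d))
y_def = X_def @ w_true
clean_mask = np.ones(20, dtype=bool)      # stand-in for the learned detector
client_ws = [w_true + 0.05 * rng.normal(size=d) for _ in range(3)]
client_ws.append(-w_true)                 # poisoned update
w_new = fl_round(client_ws, X_def, y_def, clean_mask)
print(np.round(w_new - w_true, 2))        # near zero: attacker down-weighted
```

In the paper, the clean mask and the client weights are not fixed as here; both the detector and the client importance model are learned jointly by alternating minimization across FL rounds.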
Related papers
- Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks [11.390175856652856]
Clean-label attacks are a stealthier form of backdoor attack that works without changing the labels of the poisoned data.
We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate.
Our threat model poses a serious risk when training machine learning models on third-party datasets.
arXiv Detail & Related papers (2024-07-15T15:38:21Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
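As a rough, illustrative sketch of that overlap test (the Jaccard scoring and all names here are assumptions, not FedCPA's actual code):

```python
import numpy as np

def critical_sets(update, k):
    """Indices of the k largest and k smallest entries of a flat update."""
    order = np.argsort(update)
    return set(order[-k:]), set(order[:k])

def critical_similarity(u, v, k=10):
    """Mean Jaccard overlap of the top-k and bottom-k index sets."""
    ut, ub = critical_sets(u, k)
    vt, vb = critical_sets(v, k)
    jac = lambda a, b: len(a & b) / len(a | b)
    return 0.5 * (jac(ut, vt) + jac(ub, vb))

# Benign updates share critical parameters; a poisoned one does not.
rng = np.random.default_rng(1)
base = rng.normal(size=100)
benign_a = base + 0.1 * rng.normal(size=100)
benign_b = base + 0.1 * rng.normal(size=100)
poisoned = rng.normal(size=100)
print(critical_similarity(benign_a, benign_b))  # high (close to 1)
print(critical_similarity(benign_a, poisoned))  # low (close to 0)
```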
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We present MESAS, the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Poisoning Web-Scale Training Datasets is Practical [73.34964403079775]
We introduce two new dataset poisoning attacks that intentionally introduce malicious examples into a model's training set to degrade its performance.
The first attack, split-view poisoning, exploits the mutable nature of internet content to ensure that a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients.
The second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content.
arXiv Detail & Related papers (2023-02-20T18:30:54Z)
- Defending against the Label-flipping Attack in Federated Learning [5.769445676575767]
Federated learning (FL) provides autonomy and privacy by design to participating peers.
The label-flipping (LF) attack is a targeted poisoning attack where the attackers poison their training data by flipping the labels of some examples.
We propose a novel defense that first dynamically extracts the attack-relevant gradients from the peers' local updates.
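For concreteness, the LF attack itself is simple to express; the class indices and poisoning fraction below are arbitrary assumptions:

```python
import numpy as np

def flip_labels(y, src, dst, frac, rng):
    """Label-flipping attack: relabel a fraction of class `src` as `dst`."""
    idx = np.flatnonzero(y == src)
    chosen = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    y = y.copy()
    y[chosen] = dst
    return y

rng = np.random.default_rng(4)
y = rng.integers(0, 10, size=1000)            # toy labels
y_poisoned = flip_labels(y, src=3, dst=8, frac=0.5, rng=rng)
```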
arXiv Detail & Related papers (2022-07-05T12:02:54Z)
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset.
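A heavily simplified 1-D illustration of the idea follows; the paper applies 2-D autoregressive processes to image channels, and the AR coefficients and epsilon below are arbitrary assumptions:

```python
import numpy as np

def ar_perturbation(length, coeffs, eps, rng):
    """Roll white noise through an AR filter, then scale to max-norm eps."""
    noise = 0.01 * rng.normal(size=length)
    x = np.zeros(length)
    p = len(coeffs)
    for t in range(length):
        hist = x[max(0, t - p):t][::-1]       # most recent value first
        x[t] = float(np.dot(coeffs[:len(hist)], hist)) + noise[t]
    return eps * x / np.max(np.abs(x))

rng = np.random.default_rng(2)
delta = ar_perturbation(64, coeffs=np.array([0.7, -0.2]), eps=8 / 255, rng=rng)
clean = rng.uniform(size=64)                  # stand-in for one image row
poisoned = np.clip(clean + delta, 0.0, 1.0)   # perturbed, label unchanged
```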
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
- Influence Based Defense Against Data Poisoning Attacks in Online Learning [9.414651358362391]
Data poisoning is an attack where an attacker manipulates a fraction of the training data to degrade the performance of a machine learning model.
We propose a defense mechanism to minimize the degradation caused by the poisoned training data on a learner's model in an online setup.
arXiv Detail & Related papers (2021-04-24T08:39:13Z)
- SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics [44.487762480349765]
In a backdoor attack, a small fraction of poisoned data changes the behavior of a trained model when triggered by an attacker-specified watermark.
We propose a novel defense algorithm using robust covariance estimation to amplify the spectral signature of corrupted data.
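For intuition, here is a sketch of the underlying spectral-signature score using a plain SVD on centered representations; SPECTRE's contribution, robust covariance estimation and whitening, is deliberately omitted here:

```python
import numpy as np

def spectral_scores(reps):
    """Outlier score per example: squared projection of the centered
    representation onto its top singular direction."""
    centered = reps - reps.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

rng = np.random.default_rng(3)
clean = rng.normal(size=(95, 32))
poisoned = rng.normal(size=(5, 32)) + 3.0     # shifted cluster = backdoor signal
scores = spectral_scores(np.vstack([clean, poisoned]))
print(sorted(np.argsort(scores)[-5:]))        # mostly indices 95..99
```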
arXiv Detail & Related papers (2021-04-22T20:49:40Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
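A minimal sketch of the gradient-matching objective on a toy linear model follows (made-up data; the paper optimizes this on deep networks with considerably more machinery):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)
x_target = torch.randn(1, 10)                # sample the attacker wants misclassified
y_adv = torch.tensor([1])                    # attacker-chosen wrong label

# Adversarial target gradient, fixed during poison crafting.
g_adv = torch.autograd.grad(F.cross_entropy(model(x_target), y_adv),
                            model.parameters())
g_adv = torch.cat([g.flatten() for g in g_adv]).detach()

x_poison = torch.randn(8, 10)
y_poison = torch.zeros(8, dtype=torch.long)  # labels stay clean ("clean label")
delta = torch.zeros_like(x_poison, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.1)

for _ in range(100):
    g = torch.autograd.grad(F.cross_entropy(model(x_poison + delta), y_poison),
                            model.parameters(), create_graph=True)
    g = torch.cat([t.flatten() for t in g])
    loss = 1 - F.cosine_similarity(g, g_adv, dim=0)   # align the two gradients
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        delta.clamp_(-0.1, 0.1)              # keep poison perturbations small
```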
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.