Not All Poisons are Created Equal: Robust Training against Data
Poisoning
- URL: http://arxiv.org/abs/2210.09671v1
- Date: Tue, 18 Oct 2022 08:19:41 GMT
- Title: Not All Poisons are Created Equal: Robust Training against Data
Poisoning
- Authors: Yu Yang, Tian Yu Liu, Baharan Mirzasoleiman
- Abstract summary: Data poisoning causes misclassification of test time target examples by injecting maliciously crafted samples in the training data.
We propose an efficient defense mechanism that significantly reduces the success rate of various data poisoning attacks.
- Score: 15.761683760167777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data poisoning causes misclassification of test time target examples by
injecting maliciously crafted samples in the training data. Existing defenses
are often effective only against a specific type of targeted attack,
significantly degrade the generalization performance, or are prohibitively
expensive for standard deep learning pipelines.
In this work, we propose an efficient defense mechanism that significantly
reduces the success rate of various data poisoning attacks, and provides
theoretical guarantees for the performance of the model. Targeted attacks work
by adding bounded perturbations to a randomly selected subset of training data
to match the targets' gradient or representation. We show that: (i) under
bounded perturbations, only a small number of poisons can be optimized to have
a gradient close enough to that of the target to make the attack
successful; (ii) such effective poisons move away from their original class and
get isolated in the gradient space; (iii) dropping examples in low-density
gradient regions during training can successfully eliminate the effective
poisons, and guarantees training dynamics similar to those of training on the full
data. Our extensive experiments show that our method significantly decreases
the success rate of state-of-the-art targeted attacks, including Gradient
Matching and Bullseye Polytope, and easily scales to large datasets.
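To make points (ii) and (iii) concrete, here is a minimal PyTorch-style sketch of density-based filtering in gradient space. It is our illustration rather than the authors' released implementation: the model.features / model.classifier split, the last-layer gradient approximation, and the k-nearest-neighbor density score with a fixed drop fraction are all assumptions.

# Sketch of density-based filtering in gradient space (assumed details, not the
# paper's official code): approximate per-example gradients by the closed-form
# last-layer gradient, score each example by its mean distance to its k nearest
# neighbors within its own class, and drop the most isolated examples.
import torch
import torch.nn.functional as F

def last_layer_grads(model, loader, device="cpu"):
    """Per-example cross-entropy gradient w.r.t. the last linear layer,
    computed in closed form as (softmax(logits) - onehot(y)) outer features."""
    model.eval()
    grads, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            feats = model.features(x)              # penultimate representation (assumed API)
            logits = model.classifier(feats)       # last linear layer (assumed API)
            err = F.softmax(logits, dim=1)
            err[torch.arange(len(y)), y] -= 1.0    # dL/dlogits for cross-entropy
            g = torch.einsum("bi,bj->bij", err, feats).flatten(1)
            grads.append(g.cpu())
            labels.append(y.cpu())
    return torch.cat(grads), torch.cat(labels)

def isolated_indices(grads, labels, k=5, drop_frac=0.1):
    """Indices (in loader order) of examples sitting in the lowest-density
    regions of gradient space within their own class -- the suspected poisons."""
    scores = torch.zeros(len(grads))
    for c in labels.unique():
        mask = labels == c
        g = grads[mask]
        if len(g) < 2:
            continue                               # singleton class: nothing to compare against
        d = torch.cdist(g, g)                      # pairwise distances within the class
        kk = min(k + 1, len(g))
        scores[mask] = d.topk(kk, largest=False).values[:, 1:].mean(dim=1)
    n_drop = int(drop_frac * len(grads))
    return scores.topk(n_drop).indices             # most isolated examples to drop

In a full training loop one would recompute these scores periodically and skip the flagged indices when forming minibatches; the refresh schedule and drop fraction are the knobs that trade off poison removal against the amount of clean data dropped.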
Related papers
- Have You Poisoned My Data? Defending Neural Networks against Data Poisoning [0.393259574660092]
We propose a novel approach to detect and filter poisoned datapoints in the transfer learning setting.
We show that effective poisons can be successfully differentiated from clean points in the characteristic vector space.
Our evaluation shows that our proposal outperforms existing approaches in defense rate and final trained model performance.
arXiv Detail & Related papers (2024-03-20T11:50:16Z)
- Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors [26.36344184385407]
In this paper, we explore the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors.
We propose two types of attacks: (1) the input space attacks, where we modify existing attacks to craft poisoned data in the input space; and (2) the feature targeted attacks, where we find poisoned features by treating the learned feature representations as a dataset.
Our experiments examine such attacks in popular downstream tasks of fine-tuning on the same dataset and transfer learning that considers domain adaptation.
arXiv Detail & Related papers (2024-02-20T01:12:59Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
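The summary above only states that updates are moved into the frequency domain, so the following toy sketch is an interpretation rather than FreqFed's published algorithm; the DCT fingerprint length, the two-cluster split, and the majority-vote rule are our assumptions.

# Toy frequency-domain filtering of federated updates (illustrative only):
# take a DCT of each flattened client update, keep the low-frequency
# coefficients as a fingerprint, split the fingerprints into two clusters,
# and aggregate only the larger (presumed benign) cluster.
import numpy as np
from scipy.fftpack import dct
from sklearn.cluster import KMeans

def aggregate_updates(client_updates, n_low_freq=1000):
    # client_updates: list of numpy arrays, one flattened-able update per client
    flat = np.stack([u.ravel() for u in client_updates])          # (n_clients, n_params)
    fingerprints = dct(flat, norm="ortho", axis=1)[:, :n_low_freq]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(fingerprints)
    benign = labels == np.argmax(np.bincount(labels))              # majority cluster
    return flat[benign].mean(axis=0)                               # reshape back as needed

The sketch only illustrates the general intuition that malicious updates can be separated from the benign majority after the transform.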
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks [12.929357709840975]
We propose an efficient and robust training approach, based on influence functions, to defend against data poisoning attacks.
Using influence functions, we craft healthy noise that helps to harden the classification model against poisoning attacks.
Our empirical results show that HINT can efficiently protect deep learning models against the effect of both untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2023-09-15T17:12:19Z)
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks [58.718697580177356]
Attacks on deep learning models with malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
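For readers unfamiliar with it, Deep Partition Aggregation trains one base classifier per disjoint partition of the training set and predicts by plurality vote, so a single poisoned sample can influence at most one base model. The sketch below is a generic illustration of that scheme with hypothetical helper names (train_fn, predict), not the implementation evaluated in the paper.

# Sketch of a Deep Partition Aggregation-style defense (illustrative):
# deterministically hash each sample into one of k partitions, train an
# independent model per partition, and classify by plurality vote.
import hashlib
from collections import Counter

def partition(dataset, k):
    parts = [[] for _ in range(k)]
    for sample in dataset:                       # any deterministic encoding of the sample works
        h = int(hashlib.md5(repr(sample).encode()).hexdigest(), 16)
        parts[h % k].append(sample)
    return parts

def train_dpa(dataset, k, train_fn):             # train_fn: list of samples -> fitted model
    return [train_fn(p) for p in partition(dataset, k)]

def predict_dpa(models, x):
    # assumes each base model returns a single hashable class label for x
    votes = Counter(m.predict(x) for m in models)
    return votes.most_common(1)[0][0]            # plurality vote across base models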
arXiv Detail & Related papers (2023-06-28T17:59:35Z)
- Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into the training data.
We propose a general and theoretically guaranteed framework which accommodates traditional defense methods to fair classification against poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z)
- Unreasonable Effectiveness of Last Hidden Layer Activations [0.5156484100374058]
We show that using widely known activation functions in the output layer of the model with high temperature values zeroes out the gradients for both targeted and untargeted attack cases.
We experimentally verify the efficacy of our approach on the MNIST (Digit) and CIFAR10 datasets.
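A quick numerical check of the claimed effect, written by us as an illustration (the temperature value, the toy linear model, and the use of softmax as the output activation are our choices, not necessarily the paper's setup): scaling the logits by a large constant saturates the output, and the gradient available to a gradient-based attack collapses toward zero.

# Tiny demo of gradient masking from a saturated output activation (ours, not
# the paper's code): with a large scaling factor T, the softmax output becomes
# nearly one-hot and the loss gradient w.r.t. the input vanishes numerically,
# starving gradient-based attack crafting of signal.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 10, requires_grad=True)       # toy input
W = torch.randn(10, 3)                           # toy linear "model"

for T in (1.0, 100.0):
    logits = (x @ W) * T                         # large T saturates the softmax
    y = logits.argmax(dim=1)                     # the model's own confident prediction
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x)
    print(f"T={T:6.1f}  max|dL/dx| = {grad.abs().max().item():.3e}")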
arXiv Detail & Related papers (2022-02-15T12:02:59Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
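Since Gradient Matching is one of the two attacks the main paper above defends against, a compact sketch of its poison-crafting step may be useful. The variable names, step size, and epsilon bound below are our assumptions, not the authors' code; the core idea is to optimize bounded perturbations so that the training gradient of the poisons aligns, in cosine similarity, with the adversarial gradient of the target.

# Sketch of a Gradient Matching-style poisoning step (illustrative): optimize a
# bounded perturbation delta so that the gradient of the training loss on the
# poisoned samples points in the same direction as the gradient that would push
# the target example toward the adversarial label.
import torch
import torch.nn.functional as F

def gradient_matching_step(model, x_poison, y_poison, delta, x_target, y_adv,
                           lr=0.1, eps=16 / 255):
    # delta is expected to be a leaf tensor with requires_grad=True
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient the attacker wants the poisons to reproduce.
    g_target = torch.autograd.grad(F.cross_entropy(model(x_target), y_adv), params)

    # Gradient actually induced by the perturbed poisons (kept differentiable
    # in delta via create_graph=True).
    g_poison = torch.autograd.grad(F.cross_entropy(model(x_poison + delta), y_poison),
                                   params, create_graph=True)

    # Maximize cosine alignment between the two gradients, summed over layers.
    sim = sum(F.cosine_similarity(gp.flatten(), gt.flatten(), dim=0)
              for gp, gt in zip(g_poison, g_target))
    grad_delta, = torch.autograd.grad(-sim, delta)

    with torch.no_grad():                        # signed step, bounded perturbation
        delta = (delta - lr * grad_delta.sign()).clamp_(-eps, eps)
    return delta.requires_grad_(True)

This is exactly the kind of gradient alignment that the density-based filtering above is designed to detect: poisons optimized to match the target's gradient end up isolated from their own class in gradient space.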
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.