Indiscriminate Data Poisoning Attacks on Neural Networks
- URL: http://arxiv.org/abs/2204.09092v2
- Date: Thu, 15 Feb 2024 16:57:43 GMT
- Title: Indiscriminate Data Poisoning Attacks on Neural Networks
- Authors: Yiwei Lu, Gautam Kamath, Yaoliang Yu
- Abstract summary: Data poisoning attacks aim to influence a model by injecting "poisoned" data into the training process.
We take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games.
We present efficient implementations that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of poisoned points.
- Score: 28.09519873656809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data poisoning attacks, in which a malicious adversary aims to influence a
model by injecting "poisoned" data into the training process, have attracted
significant recent attention. In this work, we take a closer look at existing
poisoning attacks and connect them with old and new algorithms for solving
sequential Stackelberg games. By choosing an appropriate loss function for the
attacker and optimizing with algorithms that exploit second-order information,
we design poisoning attacks that are effective on neural networks. We present
efficient implementations that exploit modern auto-differentiation packages and
allow simultaneous and coordinated generation of tens of thousands of poisoned
points, in contrast to existing methods that generate poisoned points one by
one. We further perform extensive experiments that empirically explore the
effect of data poisoning attacks on deep neural networks.
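The attack described above can be read as a bilevel (leader-follower) problem: the attacker chooses poisoned points to maximize the victim's loss on held-out data, while the victim responds by minimizing its training loss on the clean data together with the poisons. Below is a minimal sketch of that structure, assuming a PyTorch-style autodiff setup and a toy softmax-regression victim: the attacker differentiates the victim's validation loss through one unrolled training step and updates all poisoned points simultaneously. This is a simplified, first-order-unrolled approximation rather than the paper's exact second-order algorithms, and every name, shape, and step size is an illustrative placeholder.

# Sketch: simultaneous generation of poisoned points via an unrolled
# Stackelberg (leader-follower) update. Not the authors' implementation.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, k = 20, 3                                        # feature dim, classes (toy sizes)
x_clean  = torch.randn(256, d); y_clean  = torch.randint(0, k, (256,))
x_val    = torch.randn(128, d); y_val    = torch.randint(0, k, (128,))
x_poison = torch.randn(64, d, requires_grad=True)   # all poisons, optimized jointly
y_poison = torch.randint(0, k, (64,))

w = torch.zeros(d, k, requires_grad=True)           # victim: softmax-regression weights
opt_w = torch.optim.SGD([w], lr=0.1)
lr_poison = 0.05

for step in range(100):
    # Follower (victim): one training step on clean + poisoned data, kept
    # differentiable w.r.t. the poisoned inputs.
    train_loss = F.cross_entropy(torch.cat([x_clean, x_poison]) @ w,
                                 torch.cat([y_clean, y_poison]))
    g_w, = torch.autograd.grad(train_loss, w, create_graph=True)
    w_next = w - 0.1 * g_w                           # unrolled update keeps the graph

    # Leader (attacker): loss of the hypothetically updated victim on held-out data.
    attack_loss = F.cross_entropy(x_val @ w_next, y_val)
    g_p, = torch.autograd.grad(attack_loss, x_poison)

    # Gradient *ascent* on every poisoned point at once.
    with torch.no_grad():
        x_poison += lr_poison * g_p.sign()

    # The victim then takes its real (non-differentiable) training step.
    opt_w.zero_grad()
    F.cross_entropy(torch.cat([x_clean, x_poison.detach()]) @ w,
                    torch.cat([y_clean, y_poison])).backward()
    opt_w.step()

The create_graph=True call is where second-order information enters: the attacker's gradient with respect to the poisons passes through the derivative of the victim's own gradient step, which is what the abstract's "second-order information" refers to.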
Related papers
- Have You Poisoned My Data? Defending Neural Networks against Data Poisoning [0.393259574660092]
We propose a novel approach to detect and filter poisoned datapoints in the transfer learning setting.
We show that effective poisons can be successfully differentiated from clean points in the characteristic vector space.
Our evaluation shows that our proposal outperforms existing approaches in defense rate and final trained model performance.
arXiv Detail & Related papers (2024-03-20T11:50:16Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, Memorization Discrepancy, to explore defenses via model-level information.
By implicitly transferring changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - Sharpness-Aware Data Poisoning Attack [38.01535347191942]
Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks.
We propose a novel attack method called "Sharpness-Aware Data Poisoning Attack" (SAPA).
In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the worst re-trained model.
arXiv Detail & Related papers (2023-05-24T08:00:21Z) - Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm [0.0]
Adversarial attacks significantly threaten the robustness of deep neural networks (DNNs).
This work proposes a novel method that combines the defensive distillation mechanism with a denoising autoencoder (DAE).
arXiv Detail & Related papers (2023-03-28T11:34:54Z) - PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks [22.900501880865658]
Backdoor attacks pose a new threat to deep neural networks (DNNs).
We propose PiDAn, an algorithm based on coherence optimization that purifies the poisoned data.
Our PiDAn algorithm can detect more than 90% of infected classes and identify 95% of poisoned samples.
arXiv Detail & Related papers (2022-03-17T12:37:21Z) - Few-shot Backdoor Defense Using Shapley Estimation [123.56934991060788]
We develop a new approach called Shapley Pruning (ShapPruning) to mitigate backdoor attacks on deep neural networks.
ShapPruning identifies the few infected neurons (under 1% of all neurons) while preserving the model's structure and accuracy.
Experiments demonstrate the effectiveness and robustness of our method against various attacks and tasks.
arXiv Detail & Related papers (2021-12-30T02:27:03Z) - Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset; a sketch of the gradient-matching objective follows this list.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
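For context on the last entry, here is a minimal sketch of the gradient-matching idea named in the Witches' Brew title, assuming a PyTorch setup: the attacker perturbs a batch of clean-label poisons so that the training gradient they induce aligns, in cosine similarity, with the gradient of an adversarial target loss. The helper and argument names are illustrative rather than the authors' API, and the full attack adds an epsilon-ball constraint, data augmentation, and restarts that are omitted here.

# Sketch of a gradient-matching objective for clean-label poisoning.
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, x_poison, y_poison, x_target, y_adv):
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient the attacker would like the victim to follow: push the
    # target example toward the adversarial label.
    adv_loss = F.cross_entropy(model(x_target), y_adv)
    g_adv = torch.autograd.grad(adv_loss, params)

    # Gradient actually induced by the (perturbed) clean-label poison batch;
    # create_graph=True lets us optimize x_poison through this quantity.
    poison_loss = F.cross_entropy(model(x_poison), y_poison)
    g_poison = torch.autograd.grad(poison_loss, params, create_graph=True)

    # Negative cosine similarity between the two flattened gradients.
    a = torch.cat([g.flatten() for g in g_adv])
    b = torch.cat([g.flatten() for g in g_poison])
    return 1.0 - torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)

Minimizing this loss with respect to a bounded perturbation of x_poison (for example, signed gradient steps followed by projection) yields poisons that keep their original labels yet steer training toward the attacker's target.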
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.