Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
- URL: http://arxiv.org/abs/2303.03592v3
- Date: Tue, 6 Jun 2023 04:39:06 GMT
- Title: Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
- Authors: Yiwei Lu, Gautam Kamath, Yaoliang Yu
- Abstract summary: We introduce the notion of model poisoning reachability as a technical tool to explore the intrinsic limits of data poisoning attacks towards target parameters.
We derive an easily computable threshold to establish and quantify a surprising phase transition phenomenon among popular ML models.
Our work highlights the critical role played by the poisoning ratio, and sheds new insights on existing empirical results, attacks and mitigation strategies in data poisoning.
- Score: 31.339252233416477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Indiscriminate data poisoning attacks aim to decrease a model's test accuracy
by injecting a small amount of corrupted training data. Despite significant
interest, existing attacks remain relatively ineffective against modern machine
learning (ML) architectures. In this work, we introduce the notion of model
poisoning reachability as a technical tool to explore the intrinsic limits of
data poisoning attacks towards target parameters (i.e., model-targeted
attacks). We derive an easily computable threshold to establish and quantify a
surprising phase transition phenomenon among popular ML models: data poisoning
attacks can achieve certain target parameters only when the poisoning ratio
exceeds our threshold. Building on existing parameter corruption attacks and
refining the Gradient Canceling attack, we perform extensive experiments to
confirm our theoretical findings, test the predictability of our transition
threshold, and significantly improve existing indiscriminate data poisoning
baselines over a range of datasets and models. Our work highlights the critical
role played by the poisoning ratio, and sheds new insights on existing
empirical results, attacks and mitigation strategies in data poisoning.
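To make the gradient-canceling idea concrete, here is a minimal sketch for a logistic-regression learner: the poisoned features are optimized so that the total training gradient vanishes at the target parameters, which is what makes those parameters reachable. This is an illustration under our own assumptions, not the paper's implementation: poisoned labels are fixed at random, no feasibility constraints are placed on the crafted features, and helper names such as `gradient_canceling` are ours.

```python
import jax
import jax.numpy as jnp

def logistic_grad(w, X, y):
    # Summed gradient at w of the logistic loss sum_i log(1 + exp(-y_i * w . x_i)),
    # with labels y in {-1, +1}.
    margins = y * (X @ w)
    coeff = -y * jax.nn.sigmoid(-margins)
    return (X * coeff[:, None]).sum(axis=0)

def canceling_objective(X_p, y_p, w_target, g_clean):
    # Squared norm of the total training gradient at the target parameters.
    g_total = g_clean + logistic_grad(w_target, X_p, y_p)
    return 0.5 * jnp.sum(g_total ** 2)

def gradient_canceling(w_target, X_clean, y_clean, n_poison, steps=500, lr=0.5, seed=0):
    """Optimize the features of n_poison points (labels fixed at random) so that
    w_target becomes a stationary point of training on clean + poisoned data.
    Returns the crafted points and the residual gradient norm at w_target."""
    k1, k2 = jax.random.split(jax.random.PRNGKey(seed))
    X_p = jax.random.normal(k1, (n_poison, X_clean.shape[1]))
    y_p = jnp.where(jax.random.bernoulli(k2, 0.5, (n_poison,)), 1.0, -1.0)
    g_clean = logistic_grad(w_target, X_clean, y_clean)

    grad_fn = jax.grad(canceling_objective)  # gradient w.r.t. the poisoned features
    for _ in range(steps):
        X_p = X_p - lr * grad_fn(X_p, y_p, w_target, g_clean)

    residual = jnp.linalg.norm(g_clean + logistic_grad(w_target, X_p, y_p))
    return X_p, y_p, residual
```

Sweeping `n_poison` and watching the returned residual gives a rough empirical probe of the phase transition described above: when the poisoning ratio is too small, the residual gradient norm cannot be driven to zero, so `w_target` is not reachable.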
Related papers
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples within a mixed dataset is both valuable and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- Have You Poisoned My Data? Defending Neural Networks against Data Poisoning [0.393259574660092]
We propose a novel approach to detect and filter poisoned datapoints in the transfer learning setting.
We show that effective poisons can be successfully differentiated from clean points in the characteristic vector space.
Our evaluation shows that our proposal outperforms existing approaches in defense rate and final trained model performance.
arXiv Detail & Related papers (2024-03-20T11:50:16Z)
- What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? [15.848311379119295]
We study indiscriminate poisoning for linear learners where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error.
Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we investigate whether datasets can be inherently robust to indiscriminate poisoning attacks for linear learners.
arXiv Detail & Related papers (2023-07-03T14:54:13Z)
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks [58.718697580177356]
Attacks that corrupt deep learning models by injecting malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
arXiv Detail & Related papers (2023-06-28T17:59:35Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, Memorization Discrepancy, to explore defenses using model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Temporal Robustness against Data Poisoning [69.01705108817785]
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
We propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack started and how long it lasted.
arXiv Detail & Related papers (2023-02-07T18:59:19Z)
- Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems [0.30458514384586405]
We study how poisoning impacts the least-squares estimate through the lens of statistical testing.
We propose a stealthy data poisoning attack on the least-squares estimator that can escape classical statistical tests.
arXiv Detail & Related papers (2022-11-16T10:01:03Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Model-Targeted Poisoning Attacks with Provable Convergence [19.196295769662186]
In a poisoning attack, an adversary with control over a small fraction of the training data attempts to select that data in a way that induces a corrupted model.
We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a specified model; a generic greedy variant is sketched after this entry.
arXiv Detail & Related papers (2020-06-30T01:56:35Z)
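The model-targeted attack summarized in the entry above lends itself to a simple greedy prototype. The sketch below is a generic illustration under our own assumptions (a logistic-regression victim, a fixed pool of labeled candidate points, hypothetical names such as `model_targeted_attack`), not the authors' exact algorithm: at each step the victim is retrained on clean plus current poison, and the candidate on which the induced model's loss most exceeds the target model's loss is appended.

```python
import jax
import jax.numpy as jnp

def logistic_loss(w, X, y, reg=1e-2):
    # L2-regularized average logistic loss; labels y are floats in {-1, +1}.
    margins = y * (X @ w)
    return jnp.mean(jnp.log1p(jnp.exp(-margins))) + 0.5 * reg * jnp.sum(w ** 2)

def train(X, y, steps=300, lr=0.5):
    # Plain gradient descent; adequate for a small convex problem.
    w = jnp.zeros(X.shape[1])
    grad = jax.grad(logistic_loss)
    for _ in range(steps):
        w = w - lr * grad(w, X, y)
    return w

def model_targeted_attack(w_target, X_clean, y_clean, candidates, budget):
    """Greedy sketch: retrain on clean + poison, then append the labeled candidate
    on which the current model's loss most exceeds the target model's loss, nudging
    subsequent retraining toward w_target.  `candidates` has shape (m, d + 1) with
    the label in the last column (a hypothetical interface)."""
    d = X_clean.shape[1]
    X_p = jnp.zeros((0, d))
    y_p = jnp.zeros((0,))

    def loss_gap(xy, w_cur):
        # Per-point logistic loss under the induced model minus under the target model.
        x, y = xy[:-1], xy[-1]
        return (jnp.log1p(jnp.exp(-y * (x @ w_cur)))
                - jnp.log1p(jnp.exp(-y * (x @ w_target))))

    for _ in range(budget):
        w_cur = train(jnp.concatenate([X_clean, X_p]),
                      jnp.concatenate([y_clean, y_p]))
        gaps = jax.vmap(loss_gap, in_axes=(0, None))(candidates, w_cur)
        best = candidates[jnp.argmax(gaps)]
        X_p = jnp.concatenate([X_p, best[None, :-1]])
        y_p = jnp.concatenate([y_p, best[None, -1]])
    return X_p, y_p
```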
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.