Rethinking Backdoor Data Poisoning Attacks in the Context of
Semi-Supervised Learning
- URL: http://arxiv.org/abs/2212.02582v1
- Date: Mon, 5 Dec 2022 20:21:31 GMT
- Title: Rethinking Backdoor Data Poisoning Attacks in the Context of
Semi-Supervised Learning
- Authors: Marissa Connor, Vincent Emanuele
- Abstract summary: Semi-supervised learning methods can train high-accuracy machine learning models with a fraction of the labeled training samples required for traditional supervised learning.
Such methods do not typically involve close review of the unlabeled training samples, making them tempting targets for data poisoning attacks.
We show that simple poisoning attacks that influence the distribution of the poisoned samples' predicted labels are highly effective.
- Score: 5.417264344115724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning methods can train high-accuracy machine learning
models with a fraction of the labeled training samples required for traditional
supervised learning. Such methods do not typically involve close review of the
unlabeled training samples, making them tempting targets for data poisoning
attacks. In this paper we investigate the vulnerabilities of semi-supervised
learning methods to backdoor data poisoning attacks on the unlabeled samples.
We show that simple poisoning attacks that influence the distribution of the
poisoned samples' predicted labels are highly effective, achieving an average
attack success rate as high as 96.9%. We introduce a generalized attack
framework targeting semi-supervised learning methods to better understand and
exploit their limitations and to motivate future defense strategies.
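To make the threat concrete, the sketch below shows one way such an attack could be mounted: a small trigger patch is stamped onto a random fraction of the unlabeled pool, and the semi-supervised learner itself assigns pseudo-labels to the triggered samples during training. This is a minimal illustration, not the paper's code; the patch trigger, the 1% poisoning rate, and all function names are assumptions.

```python
import numpy as np

def add_trigger(image, patch_size=4, value=1.0):
    """Stamp a small square trigger in the bottom-right corner of an
    (H, W, C) image scaled to [0, 1]. A patch trigger is a common choice
    in the backdoor literature, assumed here purely for illustration."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_unlabeled_pool(unlabeled, poison_fraction=0.01, seed=0):
    """Apply the trigger to a random fraction of the unlabeled pool.
    No labels are touched: the semi-supervised algorithm will
    pseudo-label the triggered samples on its own."""
    rng = np.random.default_rng(seed)
    n_poison = int(poison_fraction * len(unlabeled))
    idx = rng.choice(len(unlabeled), size=n_poison, replace=False)
    poisoned = unlabeled.copy()
    for i in idx:
        poisoned[i] = add_trigger(unlabeled[i])
    return poisoned, idx
```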
Related papers
- Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks [11.390175856652856]
Clean-label attacks are a stealthier form of backdoor attack that succeeds without changing the labels of the poisoned data.
We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate.
This threat model poses a serious risk when machine learning models are trained on third-party datasets.
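As a rough illustration of the idea, the sketch below selects target-class samples by their distance from the class centroid in feature space (the intuition being that atypical samples force the model to rely on the trigger) and poisons only those, leaving labels intact. The distance heuristic and all helper names are assumptions; the paper studies its own set of selection strategies.

```python
import numpy as np

def select_poison_candidates(features, labels, target_class, k):
    """Rank target-class samples by distance to the class centroid and
    pick the k most atypical ones. Distance-based scoring is only one
    plausible selection strategy, assumed here for illustration."""
    idx = np.where(labels == target_class)[0]
    center = features[idx].mean(axis=0)
    dists = np.linalg.norm(features[idx] - center, axis=1)
    return idx[np.argsort(dists)[-k:]]

def clean_label_poison(images, chosen, trigger_fn):
    """Apply the trigger only to the chosen samples; labels stay
    untouched, which is what makes the attack 'clean label'."""
    poisoned = images.copy()
    for i in chosen:
        poisoned[i] = trigger_fn(poisoned[i])
    return poisoned  # labels unchanged
```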
arXiv Detail & Related papers (2024-07-15T15:38:21Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- PACOL: Poisoning Attacks Against Continual Learners [1.569413950416037]
In this work, we demonstrate that continual learning systems can be manipulated by malicious misinformation.
We present a new category of data poisoning attacks specific to continual learners, which we refer to as Poisoning Attacks Against Continual Learners (PACOL).
A comprehensive set of experiments shows that commonly used generative-replay and regularization-based continual learning approaches are vulnerable to these attacks.
arXiv Detail & Related papers (2023-11-18T00:20:57Z)
- Transferable Availability Poisoning Attacks [23.241524904589326]
We consider availability data poisoning attacks, where an adversary aims to degrade the overall test accuracy of a machine learning model.
Existing poisoning strategies can achieve the attack goal but assume that the victim employs the same learning method the adversary used to mount the attack.
We propose Transferable Poisoning, which leverages the intrinsic characteristics of alignment and uniformity to make the poisoned data unlearnable across victim learners, as sketched below.
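Alignment and uniformity have standard formulations in the contrastive-learning literature (Wang and Isola, 2020); the sketch below gives only those two canonical losses. How the attack optimizes them over the poisoning perturbations is specific to the paper and not reproduced here.

```python
import torch

def alignment_loss(z1, z2, alpha=2):
    """Alignment: matched (positive) pairs should map to nearby features.
    z1, z2 are L2-normalized embeddings of two views of the same samples."""
    return (z1 - z2).norm(dim=1).pow(alpha).mean()

def uniformity_loss(z, t=2):
    """Uniformity: embeddings should spread over the unit hypersphere.
    Standard form: log of the mean Gaussian potential between all pairs."""
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()
```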
arXiv Detail & Related papers (2023-10-08T12:22:50Z)
- On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks [58.718697580177356]
Attacks that corrupt deep learning models by injecting malicious training samples are known as data poisoning.
Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving certified poisoning robustness.
Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness.
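Deep Partition Aggregation itself is simple to sketch: hash each training sample into one of k disjoint partitions, train a base classifier per partition, and predict by majority vote, so a single poisoned sample can influence at most one base model. A minimal sketch, assuming numpy-array samples and hypothetical train_fn/predict_fn hooks:

```python
import hashlib

def partition_index(sample_bytes, k):
    """Deterministically assign a sample to one of k partitions by hashing
    its contents, so any poisoned sample lands in exactly one partition."""
    return int(hashlib.sha256(sample_bytes).hexdigest(), 16) % k

def train_dpa(dataset, k, train_fn):
    """Train one base classifier per partition. train_fn is a hypothetical
    hook that takes a list of (x, y) pairs and returns a fitted model;
    x is assumed to be a numpy array (hence x.tobytes())."""
    partitions = [[] for _ in range(k)]
    for x, y in dataset:
        partitions[partition_index(x.tobytes(), k)].append((x, y))
    return [train_fn(part) for part in partitions]

def predict_dpa(models, x, predict_fn, num_classes):
    """Majority vote over the base classifiers; the vote margin is what
    yields a certified bound on tolerable poisoned samples."""
    votes = [0] * num_classes
    for m in models:
        votes[predict_fn(m, x)] += 1
    return max(range(num_classes), key=votes.__getitem__)
```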
arXiv Detail & Related papers (2023-06-28T17:59:35Z)
- Amplifying Membership Exposure via Data Poisoning [18.799570863203858]
In this paper, we investigate the third type of exploitation of data poisoning - increasing the risks of privacy leakage of benign training samples.
We propose a set of data poisoning attacks to amplify the membership exposure of the targeted class.
Our results show that the proposed attacks can substantially increase the membership inference precision with minimum overall test-time model performance degradation.
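Membership exposure is usually quantified with a membership inference attack; the simplest baseline is loss thresholding, where a low loss on a sample is taken as evidence that it was in the training set. A generic sketch of that baseline and of the precision metric, not the paper's own attack:

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership from per-sample losses: a low loss suggests the
    model has seen the sample in training. Returns a boolean array."""
    return np.asarray(losses) < threshold

def mia_precision(pred_member, is_member):
    """Precision of the membership calls: of the samples flagged as
    members, the fraction that truly were in the training set."""
    pred_member = np.asarray(pred_member)
    is_member = np.asarray(is_member)
    flagged = pred_member.sum()
    return (pred_member & is_member).sum() / flagged if flagged else 0.0
```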
arXiv Detail & Related papers (2022-11-01T13:52:25Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
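One way to formalize the budget condition above (an illustrative reading, not the paper's exact theorem statement): with $m$ training samples and an instance-targeted adversary allowed to modify at most $b(m)$ of them, PAC learnability survives when the budget is sublinear,

\[
b(m) = o(m), \qquad \text{i.e.} \quad \lim_{m \to \infty} \frac{b(m)}{m} = 0 .
\]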
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
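Bilevel data poisoning has a standard generic form; as a sketch (the paper's instantiation for randomized smoothing will differ in the inner training procedure and the outer robustness loss), the attacker chooses bounded perturbations $\delta$ to maximize a robustness-degradation loss $\mathcal{L}_{\mathrm{rob}}$ evaluated at the parameters $\theta^{*}(\delta)$ the victim learns on the poisoned data:

\[
\max_{\|\delta\| \le \epsilon} \; \mathcal{L}_{\mathrm{rob}}\big(\theta^{*}(\delta)\big)
\quad \text{s.t.} \quad
\theta^{*}(\delta) = \arg\min_{\theta} \sum_{i=1}^{n} \ell\big(x_i + \delta_i,\, y_i;\, \theta\big).
\]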
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
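The gradient-matching idea in the title can be stated compactly: craft bounded perturbations $\Delta_i$ to the poison images so that the training gradient they induce aligns, in cosine similarity, with the gradient of the attacker's target loss, so that ordinary training on the poisoned data drives the target image $x^{t}$ toward the adversarial label $y^{\mathrm{adv}}$. The notation below is assumed for illustration:

\[
\min_{\|\Delta_i\|_\infty \le \epsilon}\;
1 - \frac{\Big\langle \nabla_\theta\, \ell\big(f_\theta(x^{t}),\, y^{\mathrm{adv}}\big),\; \sum_i \nabla_\theta\, \ell\big(f_\theta(x_i + \Delta_i),\, y_i\big) \Big\rangle}
{\Big\lVert \nabla_\theta\, \ell\big(f_\theta(x^{t}),\, y^{\mathrm{adv}}\big) \Big\rVert \; \Big\lVert \sum_i \nabla_\theta\, \ell\big(f_\theta(x_i + \Delta_i),\, y_i\big) \Big\rVert} .
\]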
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences arising from their use.