Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
- URL: http://arxiv.org/abs/2402.12626v1
- Date: Tue, 20 Feb 2024 01:12:59 GMT
- Title: Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
- Authors: Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu
- Abstract summary: In this paper, we explore the threat of indiscriminate attacks on downstream tasks that apply pre-trained feature extractors.
We propose two types of attacks: (1) the input space attacks, where we modify existing attacks to craft poisoned data in the input space; and (2) the feature targeted attacks, where we find poisoned features by treating the learned feature representations as a dataset.
Our experiments examine such attacks in two popular downstream settings: fine-tuning on the same dataset and transfer learning with domain adaptation.
- Score: 26.36344184385407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models have achieved great success in supervised learning
tasks with end-to-end training, which requires large amounts of labeled data
that are not always available. Recently, many practitioners have shifted to
self-supervised learning methods that utilize cheap unlabeled data to learn a
general feature extractor via pre-training, which can be further applied to
personalized downstream tasks by simply training an additional linear layer
with limited labeled data. However, such a process may also raise concerns
regarding data poisoning attacks. For instance, indiscriminate data poisoning
attacks, which aim to decrease model utility by injecting a small number of
poisoned samples into the training set, pose a security risk to machine learning
models, but have only been studied for end-to-end supervised learning. In this
paper, we extend the exploration of the threat of indiscriminate attacks on
downstream tasks that apply pre-trained feature extractors. Specifically, we
propose two types of attacks: (1) the input space attacks, where we modify
existing attacks to directly craft poisoned data in the input space. However,
due to the difficulty of optimization under constraints, we further propose (2)
the feature targeted attacks, where we mitigate the challenge in three
stages: first, acquiring target parameters for the linear head; second,
finding poisoned features by treating the learned feature representations as a
dataset; and third, inverting the poisoned features back to the input space.
Our experiments examine such attacks in two popular downstream settings:
fine-tuning on the same dataset and transfer learning with domain adaptation.
Empirical results reveal that transfer learning is more vulnerable to our
attacks. Additionally, input space attacks are a strong threat when no
countermeasures are in place, but are otherwise weaker than feature targeted
attacks.
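The three-stage pipeline described in the abstract can be pictured with a minimal sketch. The code below assumes a frozen PyTorch feature extractor and a linear classification head; the specific objectives (label flipping to obtain the target head, plain cross-entropy on poisoned features, MSE feature matching for inversion) are simplified stand-ins chosen for illustration, not the paper's exact formulations, and all names and sizes are hypothetical.
```python
# Hedged sketch of a three-stage feature targeted attack against a frozen extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_feat, n_classes, n_clean, n_poison = 32, 16, 10, 512, 16

# Frozen pre-trained feature extractor (a random stand-in for a real backbone).
extractor = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU(), nn.Linear(d_feat, d_feat))
for p in extractor.parameters():
    p.requires_grad_(False)

x_clean = torch.randn(n_clean, d_in)
y_clean = torch.randint(0, n_classes, (n_clean,))
f_clean = extractor(x_clean)

def fit_head(features, labels, steps=200):
    head = nn.Linear(d_feat, n_classes)
    opt = torch.optim.Adam(head.parameters(), lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(head(features), labels).backward()
        opt.step()
    return head

# Stage 1: acquire target parameters for the linear head (here, by fitting
# deliberately mislabeled features; one simple choice among many).
target_head = fit_head(f_clean, (y_clean + 1) % n_classes)
for p in target_head.parameters():
    p.requires_grad_(False)

# Stage 2: treat the features as a dataset and optimize poisoned features that
# the target head fits well (a crude surrogate objective).
f_poison = torch.randn(n_poison, d_feat, requires_grad=True)
y_poison = torch.randint(0, n_classes, (n_poison,))
opt = torch.optim.Adam([f_poison], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    F.cross_entropy(target_head(f_poison), y_poison).backward()
    opt.step()

# Stage 3: invert poisoned features to the input space by matching extractor outputs.
x_poison = torch.randn(n_poison, d_in, requires_grad=True)
opt = torch.optim.Adam([x_poison], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    F.mse_loss(extractor(x_poison), f_poison.detach()).backward()
    opt.step()

# (x_poison, y_poison) would then be injected into the victim's fine-tuning set.
```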
Related papers
- Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks [11.390175856652856]
Clean-label attacks are a stealthier form of backdoor attack that does not require changing the labels of the poisoned data.
We study different strategies for selectively poisoning a small set of training samples in the target class to boost the attack success rate.
Our threat model poses a serious threat in training machine learning models with third-party datasets.
arXiv Detail & Related papers (2024-07-15T15:38:21Z) - Transferable Availability Poisoning Attacks [23.241524904589326]
We consider availability data poisoning attacks, where an adversary aims to degrade the overall test accuracy of a machine learning model.
Existing poisoning strategies can achieve the attack goal but assume that the victim employs the same learning method that the adversary uses to mount the attack.
We propose Transferable Poisoning, which first leverages the intrinsic characteristics of alignment and uniformity to enable better unlearnability.
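"Alignment" and "uniformity" here likely refer to the standard contrastive-representation measures of Wang and Isola (2020); a minimal sketch of those two quantities on placeholder embeddings is shown below. How Transferable Poisoning folds them into its poisoning objective is not reproduced here.
```python
# Standard alignment/uniformity measures on L2-normalized embeddings (illustrative only).
import torch
import torch.nn.functional as F

def alignment(z1, z2, alpha=2):
    # z1[i], z2[i] are normalized embeddings of two views of the same sample.
    return (z1 - z2).norm(dim=1).pow(alpha).mean()

def uniformity(z, t=2):
    # Log of the average pairwise Gaussian potential over normalized embeddings.
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()

z1 = F.normalize(torch.randn(256, 128), dim=1)
z2 = F.normalize(z1 + 0.05 * torch.randn(256, 128), dim=1)
print(alignment(z1, z2).item(), uniformity(z1).item())
```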
arXiv Detail & Related papers (2023-10-08T12:22:50Z) - Learning to Unlearn: Instance-wise Unlearning for Pre-trained
Classifiers [71.70205894168039]
We consider instance-wise unlearning, whose goal is to delete the information of a given set of instances from a pre-trained model.
We propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation level, and 2) leveraging weight importance metrics to pinpoint network parameters responsible for propagating unwanted information.
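As a rough illustration of the second ingredient, the sketch below scores parameters by their squared gradients on the instances to be deleted, a Fisher-style proxy for weight importance; the model, data, and scoring rule are placeholders rather than the paper's actual metric.
```python
# Squared-gradient (Fisher-style) weight-importance scores for instances to unlearn.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
x_forget = torch.randn(8, 20)             # instances whose information should be removed
y_forget = torch.randint(0, 5, (8,))

model.zero_grad()
F.cross_entropy(model(x_forget), y_forget).backward()
importance = {name: p.grad.pow(2) for name, p in model.named_parameters()}

# Parameters with the largest scores would be the first candidates for
# targeted updates or dampening during unlearning.
top = max(importance, key=lambda n: importance[n].mean().item())
print("highest average importance:", top)
```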
arXiv Detail & Related papers (2023-01-27T07:53:50Z) - Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning
Few-Shot Meta-Learners [28.468089304148453]
We attack amortized meta-learners, which allows us to craft colluding sets of inputs that fool the system's learning algorithm.
We show that in a white box setting, these attacks are very successful and can cause the target model's predictions to become worse than chance.
We explore two hypotheses to explain this: 'overfitting' by the attack, and mismatch between the model on which the attack is generated and that to which the attack is transferred.
arXiv Detail & Related papers (2022-11-23T14:55:44Z) - Amplifying Membership Exposure via Data Poisoning [18.799570863203858]
In this paper, we investigate a third type of exploitation of data poisoning: increasing the privacy-leakage risks of benign training samples.
We propose a set of data poisoning attacks to amplify the membership exposure of the targeted class.
Our results show that the proposed attacks can substantially increase membership inference precision with minimal degradation of overall test-time model performance.
arXiv Detail & Related papers (2022-11-01T13:52:25Z) - Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer attack success rate than state-of-the-art methods.
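The phrase "data and model augmentation" suggests an expectation-over-transformations flavor; the sketch below averages gradients over randomly augmented inputs and randomly perturbed copies of a surrogate model while crafting a perturbation. It is only in the spirit of the summary: LLTA's actual meta-learning procedure is not implemented here, and every component is a placeholder.
```python
# Crafting a transferable perturbation by averaging gradients over data/model augmentations.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
delta = torch.zeros_like(x, requires_grad=True)
eps, alpha, steps = 8 / 255, 2 / 255, 10

def augment_data(img):
    img = torch.flip(img, dims=[3]) if torch.rand(1) < 0.5 else img   # random flip
    return (img + 0.03 * torch.randn_like(img)).clamp(0, 1)           # random noise

def augment_model(model, sigma=0.01):
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy

for _ in range(steps):
    grad = torch.zeros_like(x)
    for _ in range(4):                       # average over augmented views/models
        model_aug = augment_model(surrogate)
        loss = F.cross_entropy(model_aug(augment_data(x + delta)), y)
        grad += torch.autograd.grad(loss, delta)[0]
    with torch.no_grad():
        delta += alpha * grad.sign()         # untargeted ascent step
        delta.clamp_(-eps, eps)
# (x + delta) would then be evaluated against an unseen victim model.
```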
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Indiscriminate Poisoning Attacks Are Shortcuts [77.38947817228656]
We find that the perturbations of advanced poisoning attacks are almost linearly separable when assigned the target labels of the corresponding samples.
We show that such synthetic perturbations are as powerful as the deliberately crafted attacks.
Our finding suggests that the shortcut learning problem is more serious than previously believed.
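The linear-separability observation can be checked directly: train a linear classifier on the perturbations alone, paired with the corresponding labels, and measure how well they separate. The sketch below does this on synthetic, class-conditioned perturbations that stand in for a real poisoned dataset.
```python
# Linear-separability check on (synthetic) poisoning perturbations.
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, n_classes = 2000, 3 * 32 * 32, 10
labels = torch.randint(0, n_classes, (n,))
# Class-wise patterns plus small noise mimic "shortcut-like" perturbations.
class_patterns = 0.03 * torch.randn(n_classes, d)
perturbations = class_patterns[labels] + 0.005 * torch.randn(n, d)

clf = nn.Linear(d, n_classes)
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(clf(perturbations), labels).backward()
    opt.step()

with torch.no_grad():
    acc = (clf(perturbations).argmax(dim=1) == labels).float().mean().item()
print(f"accuracy of a linear model on perturbations alone: {acc:.2%}")  # near 100%
```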
arXiv Detail & Related papers (2021-11-01T12:44:26Z) - Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z) - Gradient-based Data Subversion Attack Against Binary Classifiers [9.414651358362391]
In this work, we focus on label contamination attack in which an attacker poisons the labels of data to compromise the functionality of the system.
We exploit the gradients of a differentiable convex loss function with respect to the predicted label as a warm-start and formulate different strategies to find a set of data instances to contaminate.
Our experiments show that the proposed approach outperforms the baselines and is computationally efficient.
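One hedged reading of the warm-start idea: rank each training point by the gradient of a convex loss with respect to its prediction and flip the labels of the highest-scoring points within a budget. The loss, stand-in model, and selection rule below are illustrative choices, not the paper's exact strategies.
```python
# Gradient-of-loss-w.r.t.-prediction scores as a warm start for label contamination.
import numpy as np

rng = np.random.default_rng(0)
n, d, budget = 500, 20, 25
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)                      # labels in {-1, +1}

# Predictions of a (stand-in) trained linear model.
w_hat = w_true + 0.1 * rng.normal(size=d)
scores = X @ w_hat

# Logistic loss l(s, y) = log(1 + exp(-y * s)); gradient w.r.t. the prediction s.
grad_wrt_pred = -y / (1.0 + np.exp(y * scores))

# Warm start: flip labels where this gradient has the largest magnitude.
flip_idx = np.argsort(-np.abs(grad_wrt_pred))[:budget]
y_poisoned = y.copy()
y_poisoned[flip_idx] *= -1
```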
arXiv Detail & Related papers (2021-05-31T09:04:32Z) - Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z) - Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
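A compact sketch of the gradient-matching idea behind this attack: perturb a handful of clean-label poison images so that the training gradient they induce aligns, in cosine similarity, with the gradient of the attacker's target misclassification loss. The toy model, data shapes, and hyperparameters below are placeholders.
```python
# Gradient-matching poisoning objective (cosine alignment of gradients), toy version.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
params = list(model.parameters())

x_target = torch.rand(1, 3, 32, 32)          # image the attacker wants misclassified
y_adv = torch.tensor([7])                    # attacker's intended (wrong) label
x_poison = torch.rand(4, 3, 32, 32)          # clean-label poison carriers
y_poison = torch.randint(0, 10, (4,))
delta = torch.zeros_like(x_poison, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

# Gradient the attacker wants the victim's training step to reproduce.
target_grad = torch.autograd.grad(F.cross_entropy(model(x_target), y_adv), params)

for _ in range(100):
    opt.zero_grad()
    poison_grad = torch.autograd.grad(
        F.cross_entropy(model(x_poison + delta), y_poison), params, create_graph=True)
    cos = sum(F.cosine_similarity(p.flatten(), t.flatten(), dim=0)
              for p, t in zip(poison_grad, target_grad))
    (-cos).backward()                        # maximize gradient alignment
    opt.step()
    with torch.no_grad():
        delta.clamp_(-16 / 255, 16 / 255)    # keep perturbations visually small
```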
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.