TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
- URL: http://arxiv.org/abs/2301.02344v2
- Date: Wed, 24 Jan 2024 17:49:12 GMT
- Title: TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
- Authors: Hojjat Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant
Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, Ben Zorn, and
Robert Sim
- Abstract summary: We show two attacks that can bypass static analysis by planting malicious poison data in out-of-context regions such as docstrings.
Our most novel attack, TROJANPUZZLE, goes one step further in generating less suspicious poison data by never explicitly including certain (suspicious) parts of the payload in the poison data.
- Score: 27.418320728203387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With tools like GitHub Copilot, automatic code suggestion is no longer a
dream in software engineering. These tools, based on large language models, are
typically trained on massive corpora of code mined from unvetted public
sources. As a result, these models are susceptible to data poisoning attacks
where an adversary manipulates the model's training by injecting malicious
data. Poisoning attacks could be designed to influence the model's suggestions
at run time for chosen contexts, such as inducing the model into suggesting
insecure code payloads. To achieve this, prior attacks explicitly inject the
insecure code payload into the training data, making the poison data detectable
by static analysis tools that can remove such malicious data from the training
set. In this work, we demonstrate two novel attacks, COVERT and TROJANPUZZLE,
that can bypass static analysis by planting malicious poison data in
out-of-context regions such as docstrings. Our most novel attack, TROJANPUZZLE,
goes one step further in generating less suspicious poison data by never
explicitly including certain (suspicious) parts of the payload in the poison
data, while still inducing a model that suggests the entire payload when
completing code (i.e., outside docstrings). This makes TROJANPUZZLE robust
against signature-based dataset-cleansing methods that can filter out
suspicious sequences from the training data. Our evaluation against models of
two sizes demonstrates that both COVERT and TROJANPUZZLE have significant
implications for practitioners when selecting code used to train or tune
code-suggestion models.
Related papers
- Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation [120.42853706967188]
We explore the potential backdoor attacks on model adaptation launched by well-designed poisoning target data.
We propose a plug-and-play method named MixAdapt, combining it with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z) - Occlusion-based Detection of Trojan-triggering Inputs in Large Language
Models of Code [12.590783740412157]
Large language models (LLMs) are becoming an integrated part of software development.
A potential attack surface can be to inject poisonous data into the training data to make models vulnerable, aka trojaned.
It can pose a significant threat by hiding manipulative behaviors inside models, leading to compromising the integrity of the models in downstream tasks.
arXiv Detail & Related papers (2023-12-07T02:44:35Z) - On the Exploitability of Instruction Tuning [103.8077787502381]
In this work, we investigate how an adversary can exploit instruction tuning to change a model's behavior.
We propose textitAutoPoison, an automated data poisoning pipeline.
Our results show that AutoPoison allows an adversary to change a model's behavior by poisoning only a small fraction of data.
arXiv Detail & Related papers (2023-06-28T17:54:04Z) - SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics [44.487762480349765]
A small fraction of poisoned data changes the behavior of a trained model when triggered by an attacker-specified watermark.
We propose a novel defense algorithm using robust covariance estimation to amplify the spectral signature of corrupted data.
arXiv Detail & Related papers (2021-04-22T20:49:40Z) - Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability
of the Embedding Layers in NLP Models [27.100909068228813]
Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack.
In this paper, we find that it is possible to hack the model in a data-free way by modifying one single word embedding vector.
Experimental results on sentiment analysis and sentence-pair classification tasks show that our method is more efficient and stealthier.
arXiv Detail & Related papers (2021-03-29T12:19:45Z) - Being Single Has Benefits. Instance Poisoning to Deceive Malware
Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z) - Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label"
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z) - Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and
Data Poisoning Attacks [74.88735178536159]
Data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks.
We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup.
We apply rigorous tests to determine the extent to which we should fear them.
arXiv Detail & Related papers (2020-06-22T18:34:08Z) - Weight Poisoning Attacks on Pre-trained Models [103.19413805873585]
We show that it is possible to construct weight poisoning'' attacks where pre-trained weights are injected with vulnerabilities that expose backdoors'' after fine-tuning.
Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat.
arXiv Detail & Related papers (2020-04-14T16:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.