Transfer Attacks and Defenses for Large Language Models on Coding Tasks
- URL: http://arxiv.org/abs/2311.13445v1
- Date: Wed, 22 Nov 2023 15:11:35 GMT
- Title: Transfer Attacks and Defenses for Large Language Models on Coding Tasks
- Authors: Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina
Pasareanu
- Abstract summary: We study the effect of adversarial perturbations on coding tasks with large language models (LLMs).
We propose prompt-based defenses that involve modifying the prompt to include examples of adversarially perturbed code and explicit instructions for reversing adversarial perturbations.
Our experiments show that adversarial examples obtained with a smaller code model are indeed transferable, weakening the LLMs' performance.
- Score: 30.065641782962974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern large language models (LLMs), such as ChatGPT, have demonstrated
impressive capabilities for coding tasks including writing and reasoning about
code. They improve upon previous neural network models of code, such as
code2seq or seq2seq, that already demonstrated competitive results when
performing tasks such as code summarization and identifying code
vulnerabilities. However, these previous code models were shown vulnerable to
adversarial examples, i.e. small syntactic perturbations that do not change the
program's semantics, such as the inclusion of "dead code" through false
conditions or the addition of inconsequential print statements, designed to
"fool" the models. LLMs can also be vulnerable to the same adversarial
perturbations but a detailed study on this concern has been lacking so far. In
this paper we aim to investigate the effect of adversarial perturbations on
coding tasks with LLMs. In particular, we study the transferability of
adversarial examples, generated through white-box attacks on smaller code
models, to LLMs. Furthermore, to make the LLMs more robust against such
adversaries without incurring the cost of retraining, we propose prompt-based
defenses that involve modifying the prompt to include additional information
such as examples of adversarially perturbed code and explicit instructions for
reversing adversarial perturbations. Our experiments show that adversarial
examples obtained with a smaller code model are indeed transferable, weakening
the LLMs' performance. The proposed defenses show promise in improving the
model's resilience, paving the way to more robust defensive solutions for LLMs
in code-related applications.
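
As an illustration of the perturbations and defenses discussed above, the following Python sketch applies two semantics-preserving edits (dead code behind a false condition and an inconsequential print statement) to a toy function and builds a hypothetical defense prompt. The snippet, helper names, and defense wording are illustrative assumptions, not the paper's actual attack generator or prompts.

```python
# Illustrative sketch only: the perturbation templates and defense prompt
# wording are hypothetical, not the paper's exact attack or defense.

ORIGINAL = """\
def max_of_list(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best
"""

def add_dead_code(src: str) -> str:
    """Append a block guarded by a false condition; semantics are unchanged."""
    dead_block = (
        "    if False:\n"
        "        unused = 0  # dead code: never executes"
    )
    lines = src.rstrip().splitlines()
    # Insert the dead block just before the final return statement.
    lines.insert(len(lines) - 1, dead_block)
    return "\n".join(lines) + "\n"

def add_print_statement(src: str) -> str:
    """Insert an inconsequential print; the return value does not change."""
    lines = src.rstrip().splitlines()
    lines.insert(1, '    print("")  # inconsequential statement')
    return "\n".join(lines) + "\n"

def build_defense_prompt(perturbed_code: str) -> str:
    """A hypothetical prompt-based defense: describe the perturbations and
    instruct the model to undo such edits before answering."""
    return (
        "Some programs below may contain adversarial edits such as dead code\n"
        "guarded by false conditions or meaningless print statements.\n"
        "Mentally remove such edits, then summarize what the code does.\n\n"
        f"Code:\n{perturbed_code}"
    )

if __name__ == "__main__":
    perturbed = add_print_statement(add_dead_code(ORIGINAL))
    print(build_defense_prompt(perturbed))
```

The perturbed program computes exactly the same result as the original, which is what makes such edits useful for probing model robustness without changing program semantics.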
Related papers
- What You See Is Not Always What You Get: An Empirical Study of Code Comprehension by Large Language Models [0.5735035463793009]
We investigate the vulnerability of large language models (LLMs) to imperceptible attacks, where hidden character manipulation in source code misleads LLMs' behaviour while remaining undetectable to human reviewers.
These attacks include code reordering, invisible coding characters, code deletions, and code homoglyphs; a minimal illustration follows this entry.
Our findings confirm the susceptibility of LLMs to imperceptible coding character attacks, with different LLMs exhibiting different negative correlations between perturbation magnitude and performance.
arXiv Detail & Related papers (2024-12-11T04:52:41Z)
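
For a concrete sense of the imperceptible edits studied in the paper above, the short Python sketch below constructs a homoglyph variant and an invisible-character variant of an identifier; the identifier and code points are illustrative choices, not examples taken from that paper.

```python
# Hypothetical illustration of imperceptible code edits: these identifiers
# look identical to a human reviewer but differ at the code-point level.

latin_name = "password_check"
homoglyph_name = "p\u0430ssword_check"   # U+0430 (Cyrillic a) replaces Latin 'a'
invisible_name = "password\u200b_check"  # U+200B (zero width space) inserted

print(latin_name == homoglyph_name)                # False, despite looking the same
print(latin_name == invisible_name)                # False
print([hex(ord(c)) for c in homoglyph_name[:2]])   # ['0x70', '0x430']
```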
- Case2Code: Scalable Synthetic Data for Code Generation [105.89741089673575]
Large Language Models (LLMs) have shown outstanding breakthroughs in code generation.
Recent work improves code LLMs by training on synthetic data generated by some powerful LLMs.
We propose a Case2Code task by exploiting the expressiveness and correctness of programs.
arXiv Detail & Related papers (2024-07-17T11:35:00Z)
- TPIA: Towards Target-specific Prompt Injection Attack against Code-oriented Large Language Models [28.827640446926253]
This paper presents a novel attack paradigm against Code LLMs, namely the target-specific prompt injection attack (TPIA).
TPIA generates non-functional perturbations that encode malicious instructions and inserts them into the victim's code context.
We show that our TPIA can successfully attack three representative open-source Code LLMs and two mainstream commercial Code LLM-integrated applications.
arXiv Detail & Related papers (2024-07-12T10:59:32Z) - An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection [17.948513691133037]
We introduce CodeBreaker, a pioneering LLM-assisted backdoor attack framework on code completion models.
By integrating malicious payloads directly into the source code with minimal transformation, CodeBreaker challenges current security measures.
arXiv Detail & Related papers (2024-06-10T22:10:05Z) - Assessing Cybersecurity Vulnerabilities in Code Large Language Models [18.720986922660543]
EvilInstructCoder is a framework designed to assess the cybersecurity vulnerabilities of instruction-tuned Code LLMs to adversarial attacks.
It incorporates practical threat models to reflect real-world adversaries with varying capabilities.
We conduct a comprehensive investigation into the exploitability of instruction tuning for coding tasks using three state-of-the-art Code LLM models.
arXiv Detail & Related papers (2024-04-29T10:14:58Z) - AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting (AdaShield), which prepends defense prompts to inputs to protect MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z) - Coercing LLMs to do and reveal (almost) anything [80.8601180293558]
It has been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements.
We argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking.
arXiv Detail & Related papers (2024-02-21T18:59:13Z) - SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt and then aggregates the corresponding predictions to detect adversarial inputs (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
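
The perturb-and-aggregate idea behind SmoothLLM can be sketched as follows; `query_model`, the character-swap perturbation, and the majority vote are simplifying placeholders rather than the paper's exact perturbation types or aggregation rule.

```python
import random
from collections import Counter
from typing import Callable

def random_char_perturb(prompt: str, swap_rate: float = 0.1) -> str:
    """Randomly replace a fraction of characters in the prompt
    (a simple stand-in for SmoothLLM's character-level perturbations)."""
    chars = list(prompt)
    for i in range(len(chars)):
        if random.random() < swap_rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

def smoothed_response(prompt: str,
                      query_model: Callable[[str], str],  # placeholder for an LLM call
                      num_copies: int = 8) -> str:
    """Query the model on several randomly perturbed copies of the prompt
    and return the majority response across those copies."""
    responses = [query_model(random_char_perturb(prompt)) for _ in range(num_copies)]
    most_common_response, _count = Counter(responses).most_common(1)[0]
    return most_common_response
```

Because adversarially-crafted prompts are brittle to character-level changes, the perturbed copies rarely agree on a harmful completion, so the aggregate tends to recover benign behavior.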
- Universal and Transferable Adversarial Attacks on Aligned Language Models [118.41733208825278]
We propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors.
Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable.
arXiv Detail & Related papers (2023-07-27T17:49:12Z)
- On Extracting Specialized Code Abilities from Large Language Models: A Feasibility Study [22.265542509143756]
We investigate the feasibility of launching imitation attacks on large language models (LLMs).
We show that attackers can train a medium-sized backbone model to replicate specialized code behaviors similar to the target LLMs.
arXiv Detail & Related papers (2023-03-06T10:34:41Z) - Semantic-Preserving Adversarial Code Comprehension [75.76118224437974]
We propose Semantic-Preserving Adversarial Code Embeddings (SPACE) to find the worst-case semantic-preserving attacks.
Experiments and analysis demonstrate that SPACE can stay robust against state-of-the-art attacks while boosting the performance of PrLMs for code.
arXiv Detail & Related papers (2022-09-12T10:32:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.