PRP: Propagating Universal Perturbations to Attack Large Language Model
Guard-Rails
- URL: http://arxiv.org/abs/2402.15911v1
- Date: Sat, 24 Feb 2024 21:27:13 GMT
- Title: PRP: Propagating Universal Perturbations to Attack Large Language Model
Guard-Rails
- Authors: Neal Mangaokar, Ashish Hooda, Jihye Choi, Shreyas Chandrashekaran,
Kassem Fawaz, Somesh Jha, Atul Prakash
- Abstract summary: Large language models (LLMs) are typically aligned to be harmless to humans.
Recent work has shown that such models are susceptible to automated jailbreak attacks that induce them to generate harmful content.
Our key contribution is to show a novel attack strategy, PRP, that is successful against several open-source (e.g., Llama 2) and closed-source (e.g., GPT 3.5) implementations of Guard Models.
- Score: 26.090757124460552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are typically aligned to be harmless to humans.
Unfortunately, recent work has shown that such models are susceptible to
automated jailbreak attacks that induce them to generate harmful content. More
recent LLMs often incorporate an additional layer of defense, a Guard Model,
which is a second LLM that is designed to check and moderate the output
response of the primary LLM. Our key contribution is to show a novel attack
strategy, PRP, that is successful against several open-source (e.g., Llama 2)
and closed-source (e.g., GPT 3.5) implementations of Guard Models. PRP
leverages a two-step prefix-based attack that operates by (a) constructing a
universal adversarial prefix for the Guard Model, and (b) propagating this
prefix to the response. We find that this procedure is effective across
multiple threat models, including ones in which the adversary has no access to
the Guard Model at all. Our work suggests that further advances are required on
defenses and Guard Models before they can be considered effective.
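To make the two-step structure described above concrete, here is a minimal Python sketch. It is an illustration, not the authors' implementation: it assumes the Guard Model is a binary harmful/harmless sequence classifier, assumes the base LLM exposes a simple `generate(prompt)` interface, and substitutes a naive random token-swap search for the paper's prefix optimization; all function and variable names are hypothetical.

```python
# Minimal sketch of the two-step PRP attack described above (not the
# authors' code). Assumptions: the Guard Model is a binary sequence
# classifier with a "harmful" label index, and the base LLM exposes a
# simple generate(prompt) -> str interface. The random token-swap search
# below stands in for the paper's prefix optimization.
import torch

HARMFUL = 1  # assumed index of the Guard Model's "harmful" label


def guard_score(guard_model, tokenizer, prefix_ids, response):
    """Probability the Guard Model labels `prefix + response` as harmful."""
    ids = torch.cat([prefix_ids, torch.tensor(tokenizer.encode(response))])
    logits = guard_model(ids.unsqueeze(0)).logits[0]
    return torch.softmax(logits, dim=-1)[HARMFUL].item()


def universal_adversarial_prefix(guard_model, tokenizer, harmful_responses,
                                 prefix_len=20, iters=500, candidates=256):
    """Step (a): search for one prefix that, when prepended to *any* harmful
    response, drives the Guard Model's harmful score down."""
    prefix_ids = torch.full((prefix_len,), tokenizer.encode("!")[0],
                            dtype=torch.long)

    def avg_score(ids):
        return sum(guard_score(guard_model, tokenizer, ids, r)
                   for r in harmful_responses) / len(harmful_responses)

    best = avg_score(prefix_ids)
    for _ in range(iters):
        pos = torch.randint(prefix_len, (1,)).item()
        for tok in torch.randint(tokenizer.vocab_size, (candidates,)):
            trial = prefix_ids.clone()
            trial[pos] = tok
            score = avg_score(trial)
            if score < best:
                prefix_ids, best = trial, score
    return tokenizer.decode(prefix_ids)


def propagate(base_llm, jailbreak_prompt, adv_prefix, examples):
    """Step (b): a propagation prefix of in-context examples nudges the base
    LLM to begin its own response with `adv_prefix`, so the prefix reaches
    the Guard Model's input and suppresses the harmful verdict."""
    propagation_prefix = "".join(
        f"Q: {q}\nA: {adv_prefix} {a}\n\n" for q, a in examples
    )
    return base_llm.generate(propagation_prefix + jailbreak_prompt)
```

The abstract notes that the attack remains effective under threat models in which the adversary has no access to the Guard Model at all; the sketch above assumes white-box query access purely for simplicity.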
Related papers
- Securing Multi-turn Conversational Language Models Against Distributed Backdoor Triggers [29.554818890832887]
Multi-turn conversational large language models (LLMs) are vulnerable to data poisoning backdoor attacks.
LLMs are at risk of even more harmful and stealthy backdoor attacks in which the backdoor triggers may span multiple utterances.
We propose a new defense strategy for the more challenging multi-turn dialogue setting.
arXiv Detail & Related papers (2024-07-04T20:57:06Z)
- SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner [21.414701448926614]
This paper introduces a generic LLM jailbreak defense framework called SelfDefend.
We show that SelfDefend enables GPT-3.5 to suppress the attack success rate (ASR) by 8.97-95.74%.
We also empirically show that the tuned models are robust to targeted GCG and prompt injection attacks.
arXiv Detail & Related papers (2024-06-08T15:45:31Z)
- Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks [59.46556573924901]
This paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism for large language models (LLMs).
Unlike previous approaches, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs.
Empirical results conducted on LLAMA-2-7B-Chat and Mistral-7B-Instruct-v0.2 models demonstrate the robustness and adaptability of DPP.
arXiv Detail & Related papers (2024-05-30T14:40:35Z)
- Learning diverse attacks on large language models for robust red-teaming and safety tuning [126.32539952157083]
Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe deployment of large language models.
We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks.
We propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts.
arXiv Detail & Related papers (2024-05-28T19:16:17Z)
- TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z)
- SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
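A rough sketch of this perturb-then-aggregate idea, under assumed interfaces: `llm` is a callable returning a response string and `is_jailbroken` is a caller-supplied judge; neither name is taken from the paper.

```python
# Hedged sketch of the perturb-then-aggregate defense summarized above;
# `llm` and `is_jailbroken` are hypothetical stand-ins, not the paper's API.
import random
import string


def smoothed_response(llm, is_jailbroken, prompt, n_copies=10, swap_rate=0.1):
    responses, flags = [], []
    for _ in range(n_copies):
        chars = list(prompt)
        for i in range(len(chars)):
            if random.random() < swap_rate:
                chars[i] = random.choice(string.ascii_letters)  # character-level perturbation
        resp = llm("".join(chars))
        responses.append(resp)
        flags.append(is_jailbroken(resp))
    # Aggregate by majority vote over the perturbed copies.
    if sum(flags) > n_copies // 2:
        return "I cannot help with that."  # assumed refusal message
    return next(r for r, f in zip(responses, flags) if not f)
```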
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
- Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
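Of the three defense types listed above, the perplexity-based detector is the simplest to illustrate. The sketch below is a generic, assumed version (the function name, the threshold value, and the HuggingFace-style `ref_model(ids, labels=ids).loss` call are illustrative, not the paper's implementation): prompts whose perplexity under a reference LM exceeds a threshold are flagged as likely adversarial.

```python
# Rough sketch of a perplexity-based detection filter of the kind named
# above; names, threshold, and model interface are assumptions.
import torch


def is_suspicious(ref_model, tokenizer, prompt, threshold=1000.0):
    ids = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
    with torch.no_grad():
        loss = ref_model(ids, labels=ids).loss  # mean token negative log-likelihood
    return torch.exp(loss).item() > threshold  # high perplexity -> flag the prompt
```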
arXiv Detail & Related papers (2023-09-01T17:59:44Z)
- Universal and Transferable Adversarial Attacks on Aligned Language Models [118.41733208825278]
We propose a simple and effective attack method that causes aligned language models to generate objectionable behaviors.
Surprisingly, we find that the adversarial prompts generated by our approach are quite transferable.
arXiv Detail & Related papers (2023-07-27T17:49:12Z)