SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
- URL: http://arxiv.org/abs/2310.03684v3
- Date: Wed, 29 Nov 2023 14:39:37 GMT
- Title: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
- Authors: Alexander Robey and Eric Wong and Hamed Hassani and George J. Pappas
- Abstract summary: We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
- Score: 99.23352758320945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite efforts to align large language models (LLMs) with human values,
widely-used LLMs such as GPT, Llama, Claude, and PaLM are susceptible to
jailbreaking attacks, wherein an adversary fools a targeted LLM into generating
objectionable content. To address this vulnerability, we propose SmoothLLM, the
first algorithm designed to mitigate jailbreaking attacks on LLMs. Based on our
finding that adversarially-generated prompts are brittle to character-level
changes, our defense first randomly perturbs multiple copies of a given input
prompt, and then aggregates the corresponding predictions to detect adversarial
inputs. SmoothLLM reduces the attack success rate on numerous popular LLMs to
below one percentage point, avoids unnecessary conservatism, and admits
provable guarantees on attack mitigation. Moreover, our defense uses
exponentially fewer queries than existing attacks and is compatible with any
LLM. Our code is publicly available at the following link:
https://github.com/arobey1/smooth-llm.
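The abstract describes the defense at a high level: randomly perturb several copies of the incoming prompt at the character level, then aggregate the resulting responses. Below is a minimal sketch of that perturb-then-aggregate idea. The `llm` and `is_jailbroken` callables, the swap-only perturbation, and the choice of 10 copies at a 10% perturbation rate are illustrative assumptions, not the paper's exact settings; see the linked repository for the official implementation.

```python
import random
import string

def perturb(prompt: str, rate: float = 0.1) -> str:
    # Randomly swap a fraction of the prompt's characters
    # (the paper also considers insertion and patch perturbations).
    chars = list(prompt)
    if not chars:
        return prompt
    n_swaps = max(1, int(len(chars) * rate))
    for i in random.sample(range(len(chars)), n_swaps):
        chars[i] = random.choice(string.ascii_letters + string.digits + string.punctuation + " ")
    return "".join(chars)

def smoothllm(prompt: str, llm, is_jailbroken, n_copies: int = 10, rate: float = 0.1) -> str:
    # Query the LLM on several independently perturbed copies of the prompt,
    # then return a response consistent with the majority vote on whether
    # the model was jailbroken.
    responses = [llm(perturb(prompt, rate)) for _ in range(n_copies)]
    flags = [is_jailbroken(r) for r in responses]
    majority = sum(flags) > n_copies / 2
    for response, flag in zip(responses, flags):
        if flag == majority:
            return response
    return responses[0]
```

The aggregation step is what underlies the provable attack-mitigation guarantees claimed in the abstract: because adversarially generated suffixes are brittle to character-level changes, a successful attack would have to survive most of the random perturbations simultaneously.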
Related papers
- Defending Large Language Models Against Jailbreak Attacks via Layer-specific Editing [14.094372002702476]
Large language models (LLMs) are increasingly being adopted in a wide range of real-world applications.
Recent studies have shown that LLMs are vulnerable to deliberately crafted adversarial prompts.
We propose a novel defense method termed Layer-specific Editing (LED) to enhance the resilience of LLMs against jailbreak attacks.
arXiv Detail & Related papers (2024-05-28T13:26:12Z)
- AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting, which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z)
- ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings [58.82536530615557]
We propose an Adversarial Suffix Embedding Translation Framework (ASETF) to transform continuous adversarial suffix embeddings into coherent and understandable text.
Our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate than existing techniques.
arXiv Detail & Related papers (2024-02-25T06:46:27Z)
- Coercing LLMs to do and reveal (almost) anything [80.8601180293558]
It has been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements.
We argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking.
arXiv Detail & Related papers (2024-02-21T18:59:13Z)
- Round Trip Translation Defence against Large Language Model Jailbreaking Attacks [12.664577378692703]
We propose the Round Trip Translation (RTT) method to defend against social-engineered attacks on large language models (LLMs).
RTT paraphrases the adversarial prompt and generalizes the idea conveyed, making it easier for LLMs to detect induced harmful behavior.
We are the first to attempt to mitigate MathsAttack, reducing its attack success rate by almost 40%.
arXiv Detail & Related papers (2024-02-21T03:59:52Z)
- Instruction Backdoor Attacks Against Customized LLMs [37.92008159382539]
We propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs.
Our attack comprises three levels: word-level, syntax-level, and semantic-level, which adopt different types of triggers with progressive stealthiness.
We propose two defense strategies and demonstrate their effectiveness in reducing such attacks.
arXiv Detail & Related papers (2024-02-14T13:47:35Z)
- Weak-to-Strong Jailbreaking on Large Language Models [96.50953637783581]
Large language models (LLMs) are vulnerable to jailbreak attacks.
Existing jailbreaking methods are computationally costly.
We propose the weak-to-strong jailbreaking attack.
arXiv Detail & Related papers (2024-01-30T18:48:37Z)
- A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
However, adversarial prompts known as 'jailbreaks' can circumvent these safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
arXiv Detail & Related papers (2023-11-14T16:02:16Z)
- Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM [23.16217797677075]
We introduce a Robustly Aligned LLM (RA-LLM) to defend against potential alignment-breaking attacks.
RA-LLM can successfully defend against both state-of-the-art adversarial prompts and popular handcrafted jailbreaking prompts.
arXiv Detail & Related papers (2023-09-18T02:07:22Z)