Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation
- URL: http://arxiv.org/abs/2310.06987v1
- Date: Tue, 10 Oct 2023 20:15:54 GMT
- Title: Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation
- Authors: Yangsibo Huang, Samyak Gupta, Mengzhou Xia, Kai Li, Danqi Chen
- Abstract summary: Even carefully aligned models can be manipulated maliciously, leading to unintended behaviors known as "jailbreaks".
We propose the generation exploitation attack, which disrupts model alignment by manipulating only the decoding strategy.
Our study underscores a major failure in current safety evaluation and alignment procedures for open-source LLMs.
- Score: 39.829517061574364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid progress in open-source large language models (LLMs) is
significantly advancing AI development. Extensive efforts have been made before
model release to align their behavior with human values, with the primary goal
of ensuring their helpfulness and harmlessness. However, even carefully aligned
models can be manipulated maliciously, leading to unintended behaviors, known
as "jailbreaks". These jailbreaks are typically triggered by specific text
inputs, often referred to as adversarial prompts. In this work, we propose the
generation exploitation attack, an extremely simple approach that disrupts
model alignment by only manipulating variations of decoding methods. By
exploiting different generation strategies, including varying decoding
hyper-parameters and sampling methods, we increase the misalignment rate from
0% to more than 95% across 11 language models including LLaMA2, Vicuna, Falcon,
and MPT families, outperforming state-of-the-art attacks with $30\times$ lower
computational cost. Finally, we propose an effective alignment method that
explores diverse generation strategies, which can reasonably reduce the
misalignment rate under our attack. Altogether, our study underscores a major
failure in current safety evaluation and alignment procedures for open-source
LLMs, strongly advocating for more comprehensive red teaming and better
alignment before releasing such models. Our code is available at
https://github.com/Princeton-SysML/Jailbreak_LLM.
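Concretely, the attack sweeps generation-time settings rather than searching for adversarial prompts. Below is a minimal sketch of such a decoding-parameter sweep with Hugging Face transformers; the model name, hyper-parameter grid, and refusal-string check are illustrative assumptions, not the authors' exact configuration (their implementation is in the repository linked above).

```python
# Hedged sketch of a decoding-parameter sweep in the spirit of the
# generation exploitation attack. Model, grids, and the refusal heuristic
# are assumptions for illustration only.
from itertools import product

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # any aligned open-source chat model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

prompt = "..."  # a harmful instruction from a red-teaming benchmark

# Vary sampling hyper-parameters instead of optimizing an adversarial prompt.
temperatures = [0.7, 1.0, 1.5]
top_ps = [0.7, 0.9, 1.0]
top_ks = [20, 50, 0]  # 0 effectively disables top-k filtering

REFUSAL_MARKERS = ("I cannot", "I can't", "Sorry", "As an AI")  # crude heuristic

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
for temp, top_p, top_k in product(temperatures, top_ps, top_ks):
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=temp,
        top_p=top_p,
        top_k=top_k,
        max_new_tokens=256,
    )
    text = tokenizer.decode(
        output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    # Flag any configuration whose response does not look like a refusal.
    if not any(marker in text for marker in REFUSAL_MARKERS):
        print(f"Potential misaligned response at T={temp}, top_p={top_p}, top_k={top_k}")
```

The paper's proposed defense follows the same logic in reverse: alignment is evaluated and reinforced across a diverse set of generation configurations rather than only the default decoding setting.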
Related papers
- Poisoned LangChain: Jailbreak LLMs by LangChain [9.658883589561915]
We propose the concept of indirect jailbreak, carried out through Retrieval-Augmented Generation via LangChain.
We tested this method on six different large language models across three major categories of jailbreak issues.
arXiv Detail & Related papers (2024-06-26T07:21:02Z)
- AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens [83.08119913279488]
We present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques.
We propose three comprehensive, automated, and logical frameworks.
We show that the proposed ensemble jailbreak attack and defense framework significantly outperforms existing methods.
arXiv Detail & Related papers (2024-06-06T07:24:41Z)
- Improved Techniques for Optimization-Based Jailbreaking on Large Language Models [78.32176751215073]
The success of the Greedy Coordinate Gradient (GCG) attack has led to growing interest in optimization-based jailbreaking techniques.
We present several improved (empirical) techniques for optimization-based jailbreaks like GCG.
The results demonstrate that our improved techniques help GCG outperform state-of-the-art jailbreaking attacks and achieve a nearly 100% attack success rate.
arXiv Detail & Related papers (2024-05-31T17:07:15Z)
- Weak-to-Strong Jailbreaking on Large Language Models [96.50953637783581]
Large language models (LLMs) are vulnerable to jailbreak attacks.
Existing jailbreaking methods are computationally costly.
We propose the weak-to-strong jailbreaking attack.
arXiv Detail & Related papers (2024-01-30T18:48:37Z)
- SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt and then aggregates the corresponding predictions to detect adversarial inputs (a sketch of this perturb-and-aggregate idea appears after the related-papers list).
arXiv Detail & Related papers (2023-10-05T17:01:53Z)
- AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models [54.95912006700379]
We introduce AutoDAN, a novel jailbreak attack against aligned Large Language Models.
AutoDAN can automatically generate stealthy jailbreak prompts using a carefully designed hierarchical genetic algorithm.
arXiv Detail & Related papers (2023-10-03T19:44:37Z)
- Open Sesame! Universal Black Box Jailbreaking of Large Language Models [0.0]
Large language models (LLMs) are designed to provide helpful and safe responses.
LLMs often rely on alignment techniques to align with user intent and social guidelines.
We introduce a novel approach that employs a genetic algorithm (GA) to manipulate LLMs when model architecture and parameters are inaccessible.
arXiv Detail & Related papers (2023-09-04T08:54:20Z)
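For contrast with the decoding-level attack above, the following is a minimal sketch of the perturb-and-aggregate idea described in the SmoothLLM entry. The swap-style perturbation, copy count, and the generate / is_jailbroken callables are hypothetical placeholders, not the SmoothLLM authors' implementation.

```python
# Hedged sketch of a perturb-and-aggregate defense: perturb several copies of
# the prompt at the character level, then take a majority vote over a
# jailbreak check on the resulting responses.
import random
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def swap_perturb(prompt: str, q: float = 0.1) -> str:
    """Randomly replace a fraction q of characters in the prompt."""
    chars = list(prompt)
    if not chars:
        return prompt
    n_swaps = max(1, int(q * len(chars)))
    for idx in random.sample(range(len(chars)), n_swaps):
        chars[idx] = random.choice(ALPHABET)
    return "".join(chars)

def smoothed_is_jailbroken(prompt: str, generate, is_jailbroken, n_copies: int = 10) -> bool:
    """Majority vote over perturbed copies: True if the prompt should be
    treated as a (successful) jailbreak attempt."""
    votes = [is_jailbroken(generate(swap_perturb(prompt))) for _ in range(n_copies)]
    return sum(votes) > n_copies / 2
```

Here generate stands in for the target LLM and is_jailbroken for a refusal/jailbreak classifier; in the original paper the aggregated vote decides whether to return a model response or a refusal.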