Comprehensive Assessment of Jailbreak Attacks Against LLMs
- URL: http://arxiv.org/abs/2402.05668v1
- Date: Thu, 8 Feb 2024 13:42:50 GMT
- Title: Comprehensive Assessment of Jailbreak Attacks Against LLMs
- Authors: Junjie Chu, Yugeng Liu, Ziqing Yang, Xinyue Shen, Michael Backes, and Yang Zhang
- Abstract summary: We study 13 cutting-edge jailbreak methods from four categories, 160 questions from 16 violation categories, and six popular LLMs.
Our experimental results demonstrate that the optimized jailbreak prompts consistently achieve the highest attack success rates.
We discuss the trade-off between attack performance and efficiency, and show that the transferability of jailbreak prompts remains viable.
- Score: 28.58973312098698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misuse of Large Language Models (LLMs) has raised widespread concern.
To address this issue, safeguards have been put in place to ensure that LLMs
align with social ethics. However, recent findings have revealed an unsettling
vulnerability: these safeguards can be bypassed by so-called jailbreak attacks.
By applying techniques such as role-playing scenarios, adversarial examples, or
subtle subversion of safety objectives in a prompt, attackers can induce LLMs
to produce inappropriate or even harmful responses. While researchers have
studied several categories of jailbreak attacks, they have done so in
isolation. To fill this gap, we present the first large-scale measurement of
various jailbreak attack methods. We concentrate on 13 cutting-edge jailbreak
methods from four categories, 160 questions from 16 violation categories, and
six popular LLMs. Our extensive experimental results demonstrate that the
optimized jailbreak prompts consistently achieve the highest attack success
rates and remain robust across different LLMs. Some jailbreak prompt datasets
available on the Internet also achieve high attack success rates on many LLMs,
such as ChatGLM3, GPT-3.5, and PaLM2. Although many organizations claim that
their policies cover these violation categories, attack success rates in those
categories remain high, indicating how difficult it is to effectively align
LLMs with their policies and to counter jailbreak attacks. We also discuss the
trade-off between attack performance and efficiency, and show that jailbreak
prompts remain transferable, which makes them a viable option for attacking
black-box models. Overall, our research highlights the necessity of evaluating
different jailbreak methods. We hope our study can provide insights for future
research on jailbreak attacks and serve as a benchmark tool for practitioners
to evaluate them.
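The headline metric throughout the paper is the attack success rate (ASR): roughly, the fraction of questions for which a given jailbreak method elicits a policy-violating answer from a given model. The minimal bookkeeping sketch below illustrates how such a tabulation across methods, models, and violation categories could be organized; the Trial fields, the placeholder method name, and the toy labels are illustrative assumptions, not the authors' code or data.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Trial:
    """One jailbreak attempt: one method applied to one question on one model."""
    method: str              # placeholder label for a jailbreak method under test
    model: str               # e.g. "GPT-3.5", "ChatGLM3", "PaLM2"
    violation_category: str  # one of the 16 violation categories
    succeeded: bool          # judged label: did the model comply with the disallowed request?


def attack_success_rate(trials: list[Trial]) -> dict[tuple[str, str], float]:
    """Aggregate attack success rate (ASR) per (method, model) pair."""
    totals: dict[tuple[str, str], int] = defaultdict(int)
    successes: dict[tuple[str, str], int] = defaultdict(int)
    for t in trials:
        key = (t.method, t.model)
        totals[key] += 1
        successes[key] += int(t.succeeded)
    return {key: successes[key] / totals[key] for key in totals}


# Toy example with made-up labels, not real measurements.
trials = [
    Trial("template-prompt", "GPT-3.5", "illegal-activity", True),
    Trial("template-prompt", "GPT-3.5", "hate-speech", False),
]
print(attack_success_rate(trials))  # {('template-prompt', 'GPT-3.5'): 0.5}
```

Per-violation-category rates, as discussed in the abstract, follow the same pattern with (method, model, violation_category) keys.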
Related papers
- SQL Injection Jailbreak: a structural disaster of large language models [71.55108680517422]
We propose a novel jailbreak method, SQL Injection Jailbreak (SIJ), which exploits the way LLMs construct their input prompts to inject jailbreak information into user prompts.
Our SIJ method achieves nearly 100% attack success rates on five well-known open-source LLMs in the context of AdvBench.
arXiv Detail & Related papers (2024-11-03T13:36:34Z)
- Transferable Ensemble Black-box Jailbreak Attacks on Large Language Models [0.0]
We propose a novel black-box jailbreak attack framework that incorporates various LLM-as-Attacker methods.
Our method is designed based on three key observations from existing jailbreaking studies and practices.
arXiv Detail & Related papers (2024-10-31T01:55:33Z)
- EnJa: Ensemble Jailbreak on Large Language Models [69.13666224876408]
Large Language Models (LLMs) are increasingly being deployed in safety-critical applications.
However, LLMs can still be jailbroken by carefully crafted malicious prompts, producing content that violates policy regulations.
We propose a novel EnJa attack to hide harmful instructions using prompt-level jailbreak, boost the attack success rate using a gradient-based attack, and connect the two types of jailbreak attacks via a template-based connector.
arXiv Detail & Related papers (2024-08-07T07:46:08Z)
- Figure it Out: Analyzing-based Jailbreak Attack on Large Language Models [21.252514293436437]
We propose Analyzing-based Jailbreak (ABJ), a jailbreak attack that exploits the analyzing and reasoning capabilities of Large Language Models (LLMs).
ABJ achieves 94.8% attack success rate (ASR) and 1.06 attack efficiency (AE) on GPT-4-turbo-0409, demonstrating state-of-the-art attack effectiveness and efficiency.
arXiv Detail & Related papers (2024-07-23T06:14:41Z)
- Virtual Context: Enhancing Jailbreak Attacks with Special Token Injection [54.05862550647966]
This paper introduces Virtual Context, which leverages special tokens, previously overlooked in LLM security, to improve jailbreak attacks.
Comprehensive evaluations show that Virtual Context-assisted jailbreak attacks can improve the success rates of four widely used jailbreak methods by approximately 40%.
arXiv Detail & Related papers (2024-06-28T11:35:54Z)
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs [13.317364896194903]
Large Language Models (LLMs) have demonstrated significant capabilities in executing complex tasks in a zero-shot manner.
However, LLMs are susceptible to jailbreak attacks and can be manipulated to produce harmful outputs.
arXiv Detail & Related papers (2024-06-13T17:01:40Z)
- EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models [53.87416566981008]
This paper introduces EasyJailbreak, a unified framework that simplifies the construction and evaluation of jailbreak attacks against Large Language Models (LLMs).
It builds jailbreak attacks from four components: Selector, Mutator, Constraint, and Evaluator (a minimal interface sketch of this component split appears after this list).
Our validation across 10 distinct LLMs reveals a significant vulnerability, with an average breach probability of 60% under various jailbreaking attacks.
arXiv Detail & Related papers (2024-03-18T18:39:53Z)
- A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
However, adversarial prompts known as 'jailbreaks' can circumvent these safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
arXiv Detail & Related papers (2023-11-14T16:02:16Z)
- Jailbreaking Black Box Large Language Models in Twenty Queries [97.29563503097995]
Large language models (LLMs) are vulnerable to adversarial jailbreaks.
We propose an algorithm that generates semantic jailbreaks with only black-box access to an LLM.
arXiv Detail & Related papers (2023-10-12T15:38:28Z)
- Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks [12.540530764250812]
We propose a formalism and a taxonomy of known (and possible) jailbreaks.
We release a dataset of model outputs across 3700 jailbreak prompts over 4 tasks.
arXiv Detail & Related papers (2023-05-24T09:57:37Z)
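As referenced in the EasyJailbreak entry above, that framework's abstract names four components: Selector, Mutator, Constraint, and Evaluator. The sketch below only illustrates that component split as plain interfaces; the class and method signatures are hypothetical, not EasyJailbreak's actual API, and no attack logic is included.

```python
from abc import ABC, abstractmethod


class Selector(ABC):
    """Chooses which candidate prompt to work on next."""
    @abstractmethod
    def select(self, candidates: list[str]) -> str: ...


class Mutator(ABC):
    """Produces rewritten variants of a candidate prompt."""
    @abstractmethod
    def mutate(self, prompt: str) -> list[str]: ...


class Constraint(ABC):
    """Filters out variants that violate formal constraints (length, format, ...)."""
    @abstractmethod
    def allows(self, prompt: str) -> bool: ...


class Evaluator(ABC):
    """Judges whether a target model's response indicates a successful attack."""
    @abstractmethod
    def score(self, prompt: str, response: str) -> float: ...
```

Under such a split, a benchmarking loop would select a candidate, generate variants, drop those the constraint rejects, query the target model, and keep the evaluator's highest-scoring candidates; the separation is what allows different attack recipes to be composed and compared.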
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.