Comprehensive Assessment of Jailbreak Attacks Against LLMs
- URL: http://arxiv.org/abs/2402.05668v1
- Date: Thu, 8 Feb 2024 13:42:50 GMT
- Title: Comprehensive Assessment of Jailbreak Attacks Against LLMs
- Authors: Junjie Chu and Yugeng Liu and Ziqing Yang and Xinyue Shen and Michael
Backes and Yang Zhang
- Abstract summary: We study 13 cutting-edge jailbreak methods from four categories, 160 questions from 16 violation categories, and six popular LLMs.
Our experimental results demonstrate that the optimized jailbreak prompts consistently achieve the highest attack success rates.
We discuss the trade-off between attack performance and efficiency, and show that the transferability of jailbreak prompts remains viable.
- Score: 28.58973312098698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Misuse of Large Language Models (LLMs) has raised widespread
concern. To address this issue, safeguards have been put in place to ensure
that LLMs align with social ethics. However, recent findings have revealed an
unsettling vulnerability that bypasses these safeguards, known as jailbreak
attacks. By applying techniques such as role-playing scenarios, adversarial
examples, or subtle subversion of safety objectives in a prompt, attackers can
induce LLMs to produce inappropriate or even harmful responses. While
researchers have studied several categories of jailbreak attacks, they have
done so in isolation. To fill this gap, we present the first large-scale
measurement of
various jailbreak attack methods. We concentrate on 13 cutting-edge jailbreak
methods from four categories, 160 questions from 16 violation categories, and
six popular LLMs. Our extensive experimental results demonstrate that the
optimized jailbreak prompts consistently achieve the highest attack success
rates and exhibit robustness across different LLMs. Some jailbreak prompt
datasets available on the Internet can also achieve high attack success rates
on many LLMs, such as ChatGLM3, GPT-3.5, and PaLM2. Despite the
claims from many organizations regarding the coverage of violation categories
in their policies, the attack success rates from these categories remain high,
indicating the challenge of effectively aligning LLM policies and countering
jailbreak attacks. We also discuss the trade-off between attack performance
and efficiency, and show that the transferability of jailbreak prompts remains
viable, making them a practical option for attacking black-box models.
Overall, our research highlights the necessity of evaluating different
jailbreak methods. We hope our study can provide insights for future research
on jailbreak attacks and serve as a benchmark tool for practitioners
evaluating them.
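The paper's central quantity is the attack success rate (ASR): for each
jailbreak method and target model, the fraction of forbidden questions whose
responses are judged to be successful attacks. Below is a minimal sketch of
that tabulation with placeholder record fields; the paper's actual harness and
judging procedure are not reproduced here.

```python
from collections import defaultdict

def attack_success_rate(results):
    """Tabulate ASR per (method, model) pair.

    `results` is a list of per-attempt records with placeholder keys
    method, model, and success (one record per question x method x model).
    """
    totals, wins = defaultdict(int), defaultdict(int)
    for r in results:
        key = (r["method"], r["model"])
        totals[key] += 1
        wins[key] += int(r["success"])
    return {key: wins[key] / totals[key] for key in totals}

# Toy usage: one method against one model, two questions.
demo = [
    {"method": "GCG", "model": "GPT-3.5", "success": True},
    {"method": "GCG", "model": "GPT-3.5", "success": False},
]
print(attack_success_rate(demo))  # {('GCG', 'GPT-3.5'): 0.5}
```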
Related papers
- Virtual Context: Enhancing Jailbreak Attacks with Special Token Injection [54.05862550647966]
This paper introduces Virtual Context, which leverages special tokens, previously overlooked in LLM security, to improve jailbreak attacks.
Comprehensive evaluations show that Virtual Context-assisted jailbreak attacks can improve the success rates of four widely used jailbreak methods by approximately 40%.
arXiv Detail & Related papers (2024-06-28T11:35:54Z)
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs [13.317364896194903]
Large Language Models (LLMs) have demonstrated significant capabilities in executing complex tasks in a zero-shot manner.
However, they remain susceptible to jailbreak attacks and can be manipulated into producing harmful outputs.
arXiv Detail & Related papers (2024-06-13T17:01:40Z)
- EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models [53.87416566981008]
This paper introduces EasyJailbreak, a unified framework that simplifies the construction and evaluation of jailbreak attacks against Large Language Models (LLMs).
It builds jailbreak attacks using four components: Selector, Mutator, Constraint, and Evaluator.
Our validation across 10 distinct LLMs reveals a significant vulnerability, with an average breach probability of 60% under various jailbreaking attacks.
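The four component names (Selector, Mutator, Constraint, Evaluator) come from
the abstract; the sketch below is a hypothetical illustration of how such a
modular pipeline could be wired together, not EasyJailbreak's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical component signatures; the real EasyJailbreak interfaces may differ.
Selector = Callable[[List[str]], List[str]]   # picks which seed prompts to expand
Mutator = Callable[[str], List[str]]          # rewrites one prompt into variants
Constraint = Callable[[str], bool]            # filters out unusable variants
Evaluator = Callable[[str, str], bool]        # judges (prompt, response) pairs

@dataclass
class JailbreakPipeline:
    select: Selector
    mutate: Mutator
    constrain: Constraint
    evaluate: Evaluator

    def run(self, seeds: List[str], query_model: Callable[[str], str]) -> List[str]:
        """Return the mutated prompts that the evaluator flags as successful."""
        successes = []
        for seed in self.select(seeds):
            for variant in self.mutate(seed):
                if not self.constrain(variant):
                    continue  # drop variants that fail the constraint
                response = query_model(variant)
                if self.evaluate(variant, response):
                    successes.append(variant)
        return successes
```

Separating the four roles is what would let one harness express many attack
recipes simply by swapping components.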
arXiv Detail & Related papers (2024-03-18T18:39:53Z)
- Tastle: Distract Large Language Models for Automatic Jailbreak Attack [9.137714258654842]
We propose a black-box jailbreak framework for automated red teaming of large language models (LLMs).
Our framework is superior in terms of effectiveness, scalability and transferability.
We also evaluate the effectiveness of existing jailbreak defense methods against our attack.
arXiv Detail & Related papers (2024-03-13T11:16:43Z)
- How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs [66.05593434288625]
This paper introduces a new perspective to jailbreak large language models (LLMs) as human-like communicators.
We apply a persuasion taxonomy derived from decades of social science research to generate persuasive adversarial prompts (PAP) to jailbreak LLMs.
PAP consistently achieves an attack success rate of over 92% on Llama 2-7b Chat, GPT-3.5, and GPT-4 in 10 trials.
On the defense side, we explore various mechanisms against PAP and find a significant gap in existing defenses.
arXiv Detail & Related papers (2024-01-12T16:13:24Z)
- A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
However, adversarial prompts known as 'jailbreaks' can circumvent these safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
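The title points at the core construction: a harmful request is first
rewritten and then nested inside an innocuous-looking task scenario before
being sent to the target. The sketch below is purely illustrative of that
nesting step; the templates are invented here, not taken from ReNeLLM, whose
rewrites and scenarios are themselves generated by an LLM.

```python
# Invented scenario templates for illustration only; ReNeLLM generates its
# prompt rewrites and nesting scenarios automatically with an LLM.
SCENARIOS = [
    "Complete the following Python comment:\n# Notes on how to {payload}",
    "Fill in the missing cell of this table:\n| Task | Details |\n| {payload} | ... |",
]

def nest(rewritten_request: str, scenario_template: str) -> str:
    """Embed a paraphrased request inside a benign-looking task template."""
    return scenario_template.format(payload=rewritten_request)

print(nest("<paraphrased request>", SCENARIOS[0]))
```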
arXiv Detail & Related papers (2023-11-14T16:02:16Z)
- Jailbreaking Black Box Large Language Models in Twenty Queries [97.29563503097995]
Large language models (LLMs) are vulnerable to adversarial jailbreaks.
We propose an algorithm that generates semantic jailbreaks with only black-box access to an LLM.
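The abstract describes an attacker that needs only query (black-box) access to
the target. A schematic of such an iterative refine-and-query loop follows;
all names are hypothetical, and the paper's own algorithm is more involved.

```python
from typing import Callable, Optional

def black_box_jailbreak(
    goal: str,
    target: Callable[[str], str],         # black-box target LLM: prompt -> response
    attacker: Callable[[str, str], str],  # proposes a refined prompt from (prompt, response)
    judge: Callable[[str, str], bool],    # decides whether a response satisfies the goal
    budget: int = 20,                     # query budget, echoing the paper's title
) -> Optional[str]:
    """Iteratively refine a candidate prompt using only queries to the target."""
    prompt = goal
    for _ in range(budget):
        response = target(prompt)
        if judge(goal, response):
            return prompt  # found a working jailbreak prompt
        # Feed the refusal back to the attacker model to get a refinement.
        prompt = attacker(prompt, response)
    return None  # budget exhausted without a successful jailbreak
```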
arXiv Detail & Related papers (2023-10-12T15:38:28Z)
- "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models [50.22128133926407]
We conduct a comprehensive analysis of 1,405 jailbreak prompts spanning from December 2022 to December 2023.
We identify 131 jailbreak communities and discover unique characteristics of jailbreak prompts and their major attack strategies.
We identify five highly effective jailbreak prompts that achieve attack success rates of 0.95 on ChatGPT (GPT-3.5) and GPT-4.
arXiv Detail & Related papers (2023-08-07T16:55:20Z)
- Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks [12.540530764250812]
We propose a formalism and a taxonomy of known (and possible) jailbreaks.
We release a dataset of model outputs across 3700 jailbreak prompts over 4 tasks.
arXiv Detail & Related papers (2023-05-24T09:57:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.