A StrongREJECT for Empty Jailbreaks
- URL: http://arxiv.org/abs/2402.10260v1
- Date: Thu, 15 Feb 2024 18:58:09 GMT
- Title: A StrongREJECT for Empty Jailbreaks
- Authors: Alexandra Souly, Qingyuan Lu, Dillon Bowen, Tu Trinh, Elvis Hsieh,
Sana Pandey, Pieter Abbeel, Justin Svegliato, Scott Emmons, Olivia Watkins,
Sam Toyer
- Abstract summary: There is no standard benchmark for measuring the severity of a jailbreak.
We present StrongREJECT, which better discriminates between effective and ineffective jailbreaks.
- Score: 74.66228107886751
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of large language models (LLMs) has drawn attention to the existence
of "jailbreaks" that allow the models to be used maliciously. However, there is
no standard benchmark for measuring the severity of a jailbreak, leaving
authors of jailbreak papers to create their own. We show that these benchmarks
often include vague or unanswerable questions and use grading criteria that are
biased towards overestimating the misuse potential of low-quality model
responses. Some jailbreak techniques make the problem worse by decreasing the
quality of model responses even on benign questions: we show that several
jailbreaking techniques substantially reduce the zero-shot performance of GPT-4
on MMLU. Jailbreaks can also make it harder to elicit harmful responses from an
"uncensored" open-source model. We present a new benchmark, StrongREJECT, which
better discriminates between effective and ineffective jailbreaks by using a
higher-quality question set and a more accurate response grading algorithm. We
show that our new grading scheme better accords with human judgment of response
quality and overall jailbreak effectiveness, especially on the sort of
low-quality responses that contribute the most to over-estimation of jailbreak
performance on existing benchmarks. We release our code and data at
https://github.com/alexandrasouly/strongreject.
Related papers
- Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models [80.66766532477973]
Test-time IMmunization (TIM) can adaptively defend against various jailbreak attacks in a self-evolving way.<n>Test-time IMmunization (TIM) can adaptively defend against various jailbreak attacks in a self-evolving way.
arXiv Detail & Related papers (2025-05-28T11:57:46Z) - Foot-In-The-Door: A Multi-turn Jailbreak for LLMs [40.958137601841734]
A key challenge is jailbreak, where adversarial prompts bypass built-in safeguards to elicit harmful disallowed outputs.
Inspired by psychological foot-in-the-door principles, we introduce FITD,a novel multi-turn jailbreak method.
Our approach progressively escalates the malicious intent of user queries through intermediate bridge prompts and aligns the model's response by itself to induce toxic responses.
arXiv Detail & Related papers (2025-02-27T06:49:16Z) - xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking [32.89084809038529]
Black-box jailbreak is an attack where crafted prompts bypass safety mechanisms in large language models.
We propose a novel black-box jailbreak method leveraging reinforcement learning (RL)
We introduce a comprehensive jailbreak evaluation framework incorporating keywords, intent matching, and answer validation to provide a more rigorous and holistic assessment of jailbreak success.
arXiv Detail & Related papers (2025-01-28T06:07:58Z) - Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense [55.77152277982117]
We introduce Layer-AdvPatcher, a methodology designed to defend against jailbreak attacks.
We use an unlearning strategy to patch specific layers within large language models through self-augmented datasets.
Our framework reduces the harmfulness and attack success rate of jailbreak attacks.
arXiv Detail & Related papers (2025-01-05T19:06:03Z) - Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models [59.25318174362368]
Jailbreaking in Large Language Models (LLMs) is a major security concern as it can deceive LLMs to generate harmful text.
We conduct a detailed analysis of seven different jailbreak methods and find that disagreements stem from insufficient observation samples.
We propose a novel defense called textbfActivation Boundary Defense (ABD), which adaptively constrains the activations within the safety boundary.
arXiv Detail & Related papers (2024-12-22T14:18:39Z) - JailbreakLens: Interpreting Jailbreak Mechanism in the Lens of Representation and Circuit [21.380057443286034]
Large language models (LLMs) are vulnerable to jailbreak attacks.
Jailbreak attacks are prevalent, but the understanding of their underlying mechanisms remains limited.
arXiv Detail & Related papers (2024-11-17T16:08:34Z) - Rapid Response: Mitigating LLM Jailbreaks with a Few Examples [13.841146655178585]
We develop rapid response techniques to look to block whole classes of jailbreaks after observing only a handful of attacks.
We evaluate five rapid response methods, all of which use jailbreak proliferation.
Our strongest method reduces attack success rate by a factor greater than 240 on an in-distribution set of jailbreaks and a factor greater than 15 on an out-of-distribution set.
arXiv Detail & Related papers (2024-11-12T02:44:49Z) - Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring [47.40698758003993]
We propose an improved transfer attack method that guides malicious prompt construction by locally training a mirror model of the target black-box model through benign data distillation.
Our approach achieved a maximum attack success rate of 92%, or a balanced value of 80% with an average of 1.5 detectable jailbreak queries per sample against GPT-3.5 Turbo.
arXiv Detail & Related papers (2024-10-28T14:48:05Z) - EnJa: Ensemble Jailbreak on Large Language Models [69.13666224876408]
Large Language Models (LLMs) are increasingly being deployed in safety-critical applications.
LLMs can still be jailbroken by carefully crafted malicious prompts, producing content that violates policy regulations.
We propose a novel EnJa attack to hide harmful instructions using prompt-level jailbreak, boost the attack success rate using a gradient-based attack, and connect the two types of jailbreak attacks via a template-based connector.
arXiv Detail & Related papers (2024-08-07T07:46:08Z) - WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z) - JailbreakEval: An Integrated Toolkit for Evaluating Jailbreak Attempts Against Large Language Models [21.854909839996612]
Jailbreak attacks aim to induce Large Language Models (LLMs) to generate harmful responses for forbidden instructions.
There is (surprisingly) no consensus on how to evaluate whether a jailbreak attempt is successful.
JailbreakEval is a user-friendly toolkit focusing on the evaluation of jailbreak attempts.
arXiv Detail & Related papers (2024-06-13T16:59:43Z) - AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens [83.08119913279488]
We present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques.
We propose three comprehensive, automated, and logical frameworks.
We show that the proposed ensemble jailbreak attack and defense framework significantly outperforms existing research.
arXiv Detail & Related papers (2024-06-06T07:24:41Z) - Rethinking How to Evaluate Language Model Jailbreak [16.301224741410312]
We propose three metrics, safeguard violation, informativeness, and relative truthfulness, to evaluate language model jailbreak.
We evaluate our metrics on a benchmark dataset produced from three malicious intent datasets and three jailbreak systems.
arXiv Detail & Related papers (2024-04-09T15:54:16Z) - JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models [123.66104233291065]
Jailbreak attacks cause large language models (LLMs) to generate harmful, unethical, or otherwise objectionable content.
evaluating these attacks presents a number of challenges, which the current collection of benchmarks and evaluation techniques do not adequately address.
JailbreakBench is an open-sourced benchmark with the following components.
arXiv Detail & Related papers (2024-03-28T02:44:02Z) - JailbreakRadar: Comprehensive Assessment of Jailbreak Attacks Against LLMs [26.981225219312627]
We present a large-scale evaluation of various jailbreak attacks.<n>We collect 17 representative jailbreak attacks, summarize their features, and establish a novel jailbreak attack taxonomy.
arXiv Detail & Related papers (2024-02-08T13:42:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.